* [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes
@ 2017-11-22 18:56 Alexander Duyck
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 01/16] ixgbe: Fix interaction between SR-IOV and macvlan offload Alexander Duyck
                   ` (15 more replies)
  0 siblings, 16 replies; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:56 UTC (permalink / raw)
  To: intel-wired-lan

This patch series is meant to make both the fm10k and ixgbe drivers at
least function when macvlan offload is enabled. Prior to these patches
both fm10k and ixgbe had numerous issues that hurt driver stability with
the offload enabled; in the case of fm10k, the interfaces would never
actually receive any traffic because the filters were never configured
correctly.

There are still a few issues outstanding after these patches, but I needed
to flush out what I had before this patch set became too large.

The next set of patches will include changes to the macvlan interface
itself and so I thought that would make a good division between this patch
set and the one to follow.

---

Alexander Duyck (16):
      ixgbe: Fix interaction between SR-IOV and macvlan offload
      ixgbe: Perform reinit any time number of VFs change
      ixgbe: Add support for macvlan offload RSS on X550 and clean-up pool handling
      ixgbe: There is no need to update num_rx_pools in L2 fwd offload
      ixgbe: Fix limitations on macvlan so we can support up to 63 offloaded devices
      ixgbe: Use ring values to test for Tx pending
      ixgbe: Drop l2_accel_priv data pointer from ring struct
      ixgbe: Assume provided MAC filter has been verified by macvlan
      ixgbe: Default to 1 pool always being allocated
      ixgbe: Don't assume dev->num_tc is equal to hardware TC config
      ixgbe/fm10k: Record macvlan stats instead of Rx queue for macvlan offloaded rings
      ixgbe: Do not manipulate macvlan Tx queues when performing macvlan offload
      ixgbe: avoid bringing rings up/down as macvlans are added/removed
      ixgbe: Fix handling of macvlan Tx offload
      net: Cap number of queues even with accel_priv
      fm10k: Fix configuration for macvlan offload


 drivers/net/ethernet/intel/fm10k/fm10k_main.c    |   14 -
 drivers/net/ethernet/intel/fm10k/fm10k_netdev.c  |   25 +
 drivers/net/ethernet/intel/ixgbe/ixgbe.h         |    8 
 drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c  |    2 
 drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c |    6 
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c     |   72 ++--
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c    |  407 ++++++++--------------
 drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c   |   63 +--
 net/core/dev.c                                   |    3 
 9 files changed, 256 insertions(+), 344 deletions(-)

--

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 01/16] ixgbe: Fix interaction between SR-IOV and macvlan offload
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
@ 2017-11-22 18:56 ` Alexander Duyck
  2017-11-29 16:40   ` Bowers, AndrewX
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 02/16] ixgbe: Perform reinit any time number of VFs change Alexander Duyck
                   ` (14 subsequent siblings)
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:56 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

When SR-IOV was enabled, the macvlan offload was configuring several
filters with the wrong pool value. As a result the macvlan interfaces
were unable to receive traffic that had to pass over the physical
interface.

To fix this, wrap the pool argument in the VMDQ_P macro, which adds the
offset needed to reach the actual VMDq pool.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index f52ad0d0782f..9c6d4926a136 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -5426,10 +5426,11 @@ static int ixgbe_fwd_ring_up(struct net_device *vdev,
 		goto fwd_queue_err;
 
 	if (is_valid_ether_addr(vdev->dev_addr))
-		ixgbe_add_mac_filter(adapter, vdev->dev_addr, accel->pool);
+		ixgbe_add_mac_filter(adapter, vdev->dev_addr,
+				     VMDQ_P(accel->pool));
 
 	ixgbe_fwd_psrtype(accel);
-	ixgbe_macvlan_set_rx_mode(vdev, accel->pool, adapter);
+	ixgbe_macvlan_set_rx_mode(vdev, VMDQ_P(accel->pool), adapter);
 	return err;
 fwd_queue_err:
 	ixgbe_fwd_ring_down(vdev, accel);
@@ -9034,6 +9035,7 @@ static int get_macvlan_queue(struct net_device *upper, void *_data)
 static int handle_redirect_action(struct ixgbe_adapter *adapter, int ifindex,
 				  u8 *queue, u64 *action)
 {
+	struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
 	unsigned int num_vfs = adapter->num_vfs, vf;
 	struct upper_walk_data data;
 	struct net_device *upper;
@@ -9042,11 +9044,7 @@ static int handle_redirect_action(struct ixgbe_adapter *adapter, int ifindex,
 	for (vf = 0; vf < num_vfs; ++vf) {
 		upper = pci_get_drvdata(adapter->vfinfo[vf].vfdev);
 		if (upper->ifindex == ifindex) {
-			if (adapter->num_rx_pools > 1)
-				*queue = vf * 2;
-			else
-				*queue = vf * adapter->num_rx_queues_per_pool;
-
+			*queue = vf * __ALIGN_MASK(1, ~vmdq->mask);
 			*action = vf + 1;
 			*action <<= ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF;
 			return 0;



* [Intel-wired-lan] [jkirsher/next-queue PATCH 02/16] ixgbe: Perform reinit any time number of VFs change
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 01/16] ixgbe: Fix interaction between SR-IOV and macvlan offload Alexander Duyck
@ 2017-11-22 18:56 ` Alexander Duyck
  2017-11-29 17:00   ` Bowers, AndrewX
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 03/16] ixgbe: Add support for macvlan offload RSS on X550 and clean-up pool handling Alexander Duyck
                   ` (13 subsequent siblings)
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:56 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

If the number of VFs is changed we need to reinitialize the part, since
the offset for the device and the number of pools will be incorrect.
Without this change we can end up seeing Tx hangs and dropped Rx frames
for incoming traffic.

In addition, we should drop the code that arbitrarily changes the
default pool and queue configuration. Instead we should wait until the
port is reset and reconfigured via ixgbe_sriov_reinit.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c |   19 +++----------------
 1 file changed, 3 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
index 112d24c6c9ce..15d89258fbc3 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -227,9 +227,6 @@ void ixgbe_enable_sriov(struct ixgbe_adapter *adapter, unsigned int max_vfs)
 int ixgbe_disable_sriov(struct ixgbe_adapter *adapter)
 {
 	unsigned int num_vfs = adapter->num_vfs, vf;
-	struct ixgbe_hw *hw = &adapter->hw;
-	u32 gpie;
-	u32 vmdctl;
 	int rss;
 
 	/* set num VFs to 0 to prevent access to vfinfo */
@@ -271,18 +268,6 @@ int ixgbe_disable_sriov(struct ixgbe_adapter *adapter)
 	pci_disable_sriov(adapter->pdev);
 #endif
 
-	/* turn off device IOV mode */
-	IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT, 0);
-	gpie = IXGBE_READ_REG(hw, IXGBE_GPIE);
-	gpie &= ~IXGBE_GPIE_VTMODE_MASK;
-	IXGBE_WRITE_REG(hw, IXGBE_GPIE, gpie);
-
-	/* set default pool back to 0 */
-	vmdctl = IXGBE_READ_REG(hw, IXGBE_VT_CTL);
-	vmdctl &= ~IXGBE_VT_CTL_POOL_MASK;
-	IXGBE_WRITE_REG(hw, IXGBE_VT_CTL, vmdctl);
-	IXGBE_WRITE_FLUSH(hw);
-
 	/* Disable VMDq flag so device will be set in VM mode */
 	if (adapter->ring_feature[RING_F_VMDQ].limit == 1) {
 		adapter->flags &= ~IXGBE_FLAG_VMDQ_ENABLED;
@@ -378,13 +363,15 @@ static int ixgbe_pci_sriov_disable(struct pci_dev *dev)
 	int err;
 #ifdef CONFIG_PCI_IOV
 	u32 current_flags = adapter->flags;
+	int prev_num_vf = pci_num_vf(dev);
 #endif
 
 	err = ixgbe_disable_sriov(adapter);
 
 	/* Only reinit if no error and state changed */
 #ifdef CONFIG_PCI_IOV
-	if (!err && current_flags != adapter->flags)
+	if (!err && (current_flags != adapter->flags ||
+		     prev_num_vf != pci_num_vf(dev)))
 		ixgbe_sriov_reinit(adapter);
 #endif
 



* [Intel-wired-lan] [jkirsher/next-queue PATCH 03/16] ixgbe: Add support for macvlan offload RSS on X550 and clean-up pool handling
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 01/16] ixgbe: Fix interaction between SR-IOV and macvlan offload Alexander Duyck
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 02/16] ixgbe: Perform reinit any time number of VFs change Alexander Duyck
@ 2017-11-22 18:56 ` Alexander Duyck
  2017-11-29 17:01   ` Bowers, AndrewX
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 04/16] ixgbe: There is no need to update num_rx_pools in L2 fwd offload Alexander Duyck
                   ` (12 subsequent siblings)
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:56 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

In order for RSS to work on the macvlan pools of the X550 we need to
populate the MRQC, RETA, and RSS key values for each pool. This patch
takes care of that.

In addition, I have dropped the macvlan-specific configuration of
psrtype, since it is redundant with the code that already exists for
configuring this value.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   62 ++++++++++---------------
 1 file changed, 25 insertions(+), 37 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 9c6d4926a136..060474747ecc 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -3844,16 +3844,20 @@ static void ixgbe_store_vfreta(struct ixgbe_adapter *adapter)
 	u32 i, reta_entries = ixgbe_rss_indir_tbl_entries(adapter);
 	struct ixgbe_hw *hw = &adapter->hw;
 	u32 vfreta = 0;
-	unsigned int pf_pool = adapter->num_vfs;
 
 	/* Write redirection table to HW */
 	for (i = 0; i < reta_entries; i++) {
+		u16 pool = adapter->num_rx_pools;
+
 		vfreta |= (u32)adapter->rss_indir_tbl[i] << (i & 0x3) * 8;
-		if ((i & 3) == 3) {
-			IXGBE_WRITE_REG(hw, IXGBE_PFVFRETA(i >> 2, pf_pool),
+		if ((i & 3) != 3)
+			continue;
+
+		while (pool--)
+			IXGBE_WRITE_REG(hw,
+					IXGBE_PFVFRETA(i >> 2, VMDQ_P(pool)),
 					vfreta);
-			vfreta = 0;
-		}
+		vfreta = 0;
 	}
 }
 
@@ -3890,13 +3894,17 @@ static void ixgbe_setup_vfreta(struct ixgbe_adapter *adapter)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
 	u16 rss_i = adapter->ring_feature[RING_F_RSS].indices;
-	unsigned int pf_pool = adapter->num_vfs;
 	int i, j;
 
 	/* Fill out hash function seeds */
-	for (i = 0; i < 10; i++)
-		IXGBE_WRITE_REG(hw, IXGBE_PFVFRSSRK(i, pf_pool),
-				*(adapter->rss_key + i));
+	for (i = 0; i < 10; i++) {
+		u16 pool = adapter->num_rx_pools;
+
+		while (pool--)
+			IXGBE_WRITE_REG(hw,
+					IXGBE_PFVFRSSRK(i, VMDQ_P(pool)),
+					*(adapter->rss_key + i));
+	}
 
 	/* Fill out the redirection table */
 	for (i = 0, j = 0; i < 64; i++, j++) {
@@ -3962,7 +3970,7 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter)
 
 	if ((hw->mac.type >= ixgbe_mac_X550) &&
 	    (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)) {
-		unsigned int pf_pool = adapter->num_vfs;
+		u16 pool = adapter->num_rx_pools;
 
 		/* Enable VF RSS mode */
 		mrqc |= IXGBE_MRQC_MULTIPLE_RSS;
@@ -3972,7 +3980,11 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter)
 		ixgbe_setup_vfreta(adapter);
 		vfmrqc = IXGBE_MRQC_RSSEN;
 		vfmrqc |= rss_field;
-		IXGBE_WRITE_REG(hw, IXGBE_PFVFMRQC(pf_pool), vfmrqc);
+
+		while (pool--)
+			IXGBE_WRITE_REG(hw,
+					IXGBE_PFVFMRQC(VMDQ_P(pool)),
+					vfmrqc);
 	} else {
 		ixgbe_setup_reta(adapter);
 		mrqc |= rss_field;
@@ -4135,7 +4147,7 @@ static void ixgbe_setup_psrtype(struct ixgbe_adapter *adapter)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
 	int rss_i = adapter->ring_feature[RING_F_RSS].indices;
-	u16 pool;
+	u16 pool = adapter->num_rx_pools;
 
 	/* PSRTYPE must be initialized in non 82598 adapters */
 	u32 psrtype = IXGBE_PSRTYPE_TCPHDR |
@@ -4152,7 +4164,7 @@ static void ixgbe_setup_psrtype(struct ixgbe_adapter *adapter)
 	else if (rss_i > 1)
 		psrtype |= 1u << 29;
 
-	for_each_set_bit(pool, &adapter->fwd_bitmask, 32)
+	while (pool--)
 		IXGBE_WRITE_REG(hw, IXGBE_PSRTYPE(VMDQ_P(pool)), psrtype);
 }
 
@@ -5268,29 +5280,6 @@ static void ixgbe_macvlan_set_rx_mode(struct net_device *dev, unsigned int pool,
 	IXGBE_WRITE_REG(hw, IXGBE_VMOLR(pool), vmolr);
 }
 
-static void ixgbe_fwd_psrtype(struct ixgbe_fwd_adapter *vadapter)
-{
-	struct ixgbe_adapter *adapter = vadapter->real_adapter;
-	int rss_i = adapter->num_rx_queues_per_pool;
-	struct ixgbe_hw *hw = &adapter->hw;
-	u16 pool = vadapter->pool;
-	u32 psrtype = IXGBE_PSRTYPE_TCPHDR |
-		      IXGBE_PSRTYPE_UDPHDR |
-		      IXGBE_PSRTYPE_IPV4HDR |
-		      IXGBE_PSRTYPE_L2HDR |
-		      IXGBE_PSRTYPE_IPV6HDR;
-
-	if (hw->mac.type == ixgbe_mac_82598EB)
-		return;
-
-	if (rss_i > 3)
-		psrtype |= 2u << 29;
-	else if (rss_i > 1)
-		psrtype |= 1u << 29;
-
-	IXGBE_WRITE_REG(hw, IXGBE_PSRTYPE(VMDQ_P(pool)), psrtype);
-}
-
 /**
  * ixgbe_clean_rx_ring - Free Rx Buffers per Queue
  * @rx_ring: ring to free buffers from
@@ -5429,7 +5418,6 @@ static int ixgbe_fwd_ring_up(struct net_device *vdev,
 		ixgbe_add_mac_filter(adapter, vdev->dev_addr,
 				     VMDQ_P(accel->pool));
 
-	ixgbe_fwd_psrtype(accel);
 	ixgbe_macvlan_set_rx_mode(vdev, VMDQ_P(accel->pool), adapter);
 	return err;
 fwd_queue_err:



* [Intel-wired-lan] [jkirsher/next-queue PATCH 04/16] ixgbe: There is no need to update num_rx_pools in L2 fwd offload
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
                   ` (2 preceding siblings ...)
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 03/16] ixgbe: Add support for macvlan offload RSS on X550 and clean-up pool handling Alexander Duyck
@ 2017-11-22 18:56 ` Alexander Duyck
  2017-11-29 17:02   ` Bowers, AndrewX
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 05/16] ixgbe: Fix limitations on macvlan so we can support up to 63 offloaded devices Alexander Duyck
                   ` (11 subsequent siblings)
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:56 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

The num_rx_pools value is overwritten when we reinitialize the queue
configuration. In reality we shouldn't need to update the value at all,
since it is recomputed on every call into ixgbe_setup_tc, so drop the
spots where we were incrementing or decrementing it.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |    2 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |    3 ---
 2 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index 8e2a957aca18..56622adc76dc 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -701,7 +701,7 @@ static void ixgbe_set_num_queues(struct ixgbe_adapter *adapter)
 	adapter->num_rx_queues = 1;
 	adapter->num_tx_queues = 1;
 	adapter->num_xdp_queues = 0;
-	adapter->num_rx_pools = adapter->num_rx_queues;
+	adapter->num_rx_pools = 1;
 	adapter->num_rx_queues_per_pool = 1;
 
 #ifdef CONFIG_IXGBE_DCB
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 060474747ecc..c1df873faf68 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -9833,7 +9833,6 @@ static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
 		return ERR_PTR(-ENOMEM);
 
 	pool = find_first_zero_bit(&adapter->fwd_bitmask, 32);
-	adapter->num_rx_pools++;
 	set_bit(pool, &adapter->fwd_bitmask);
 	limit = find_last_bit(&adapter->fwd_bitmask, 32);
 
@@ -9862,7 +9861,6 @@ static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
 	netdev_info(pdev,
 		    "%s: dfwd hardware acceleration failed\n", vdev->name);
 	clear_bit(pool, &adapter->fwd_bitmask);
-	adapter->num_rx_pools--;
 	kfree(fwd_adapter);
 	return ERR_PTR(err);
 }
@@ -9874,7 +9872,6 @@ static void ixgbe_fwd_del(struct net_device *pdev, void *priv)
 	unsigned int limit;
 
 	clear_bit(fwd_adapter->pool, &adapter->fwd_bitmask);
-	adapter->num_rx_pools--;
 
 	limit = find_last_bit(&adapter->fwd_bitmask, 32);
 	adapter->ring_feature[RING_F_VMDQ].limit = limit + 1;



* [Intel-wired-lan] [jkirsher/next-queue PATCH 05/16] ixgbe: Fix limitations on macvlan so we can support up to 63 offloaded devices
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
                   ` (3 preceding siblings ...)
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 04/16] ixgbe: There is no need to update num_rx_pools in L2 fwd offload Alexander Duyck
@ 2017-11-22 18:56 ` Alexander Duyck
  2017-11-29 17:03   ` Bowers, AndrewX
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 06/16] ixgbe: Use ring values to test for Tx pending Alexander Duyck
                   ` (10 subsequent siblings)
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:56 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

This change fixes the macvlan offload so that we correctly handle
macvlan offloaded devices. Specifically, we were configuring our limits
on the assumption that we were going to max out the RSS indices for
every mode. As a result, with 15 or more macvlan interfaces we were
forced into the 2-queue RSS mode on VFs even though they could still
have supported 4.

This change splits the logic up so that we limit either the total number of
macvlan instances if DCB is enabled, or limit the number of RSS queues used
per macvlan (instead of per pool) if SR-IOV is enabled. By doing this we
can make best use of the part.

In addition I have increased the maximum number of supported interfaces to
63 with one queue per offloaded interface as this more closely reflects the
actual values supported by the interface.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe.h       |    6 ++--
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c   |    9 +++++-
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c  |   35 ++++++++++--------------
 drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c |   27 ++++++-------------
 4 files changed, 34 insertions(+), 43 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index 92a784bd2ca2..7a421b70afce 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -395,8 +395,7 @@ enum ixgbe_ring_f_enum {
 #define MAX_XDP_QUEUES			(IXGBE_MAX_FDIR_INDICES + 1)
 #define IXGBE_MAX_L2A_QUEUES		4
 #define IXGBE_BAD_L2A_QUEUE		3
-#define IXGBE_MAX_MACVLANS		31
-#define IXGBE_MAX_DCBMACVLANS		8
+#define IXGBE_MAX_MACVLANS		63
 
 struct ixgbe_ring_feature {
 	u16 limit;	/* upper limit on feature indices */
@@ -765,7 +764,8 @@ struct ixgbe_adapter {
 #endif /*CONFIG_DEBUG_FS*/
 
 	u8 default_up;
-	unsigned long fwd_bitmask; /* Bitmask indicating in use pools */
+	/* Bitmask indicating in use pools */
+	DECLARE_BITMAP(fwd_bitmask, IXGBE_MAX_MACVLANS + 1);
 
 #define IXGBE_MAX_LINK_HANDLE 10
 	struct ixgbe_jump_table *jump_tables[IXGBE_MAX_LINK_HANDLE];
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index 56622adc76dc..cceafbc3f1db 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -350,6 +350,9 @@ static bool ixgbe_set_dcb_sriov_queues(struct ixgbe_adapter *adapter)
 	if (!(adapter->flags & IXGBE_FLAG_SRIOV_ENABLED))
 		return false;
 
+	/* limit VMDq instances on the PF by number of Tx queues */
+	vmdq_i = min_t(u16, vmdq_i, MAX_TX_QUEUES / tcs);
+
 	/* Add starting offset to total pool count */
 	vmdq_i += adapter->ring_feature[RING_F_VMDQ].offset;
 
@@ -512,12 +515,14 @@ static bool ixgbe_set_sriov_queues(struct ixgbe_adapter *adapter)
 #ifdef IXGBE_FCOE
 	u16 fcoe_i = 0;
 #endif
-	bool pools = (find_first_zero_bit(&adapter->fwd_bitmask, 32) > 1);
 
 	/* only proceed if SR-IOV is enabled */
 	if (!(adapter->flags & IXGBE_FLAG_SRIOV_ENABLED))
 		return false;
 
+	/* limit l2fwd RSS based on total Tx queue limit */
+	rss_i = min_t(u16, rss_i, MAX_TX_QUEUES / vmdq_i);
+
 	/* Add starting offset to total pool count */
 	vmdq_i += adapter->ring_feature[RING_F_VMDQ].offset;
 
@@ -525,7 +530,7 @@ static bool ixgbe_set_sriov_queues(struct ixgbe_adapter *adapter)
 	vmdq_i = min_t(u16, IXGBE_MAX_VMDQ_INDICES, vmdq_i);
 
 	/* 64 pool mode with 2 queues per pool */
-	if ((vmdq_i > 32) || (vmdq_i > 16 && pools)) {
+	if (vmdq_i > 32) {
 		vmdq_m = IXGBE_82599_VMDQ_2Q_MASK;
 		rss_m = IXGBE_RSS_2Q_MASK;
 		rss_i = min_t(u16, rss_i, 2);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index c1df873faf68..101b3521ab0b 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -5377,14 +5377,13 @@ static int ixgbe_fwd_ring_up(struct net_device *vdev,
 	unsigned int rxbase, txbase, queues;
 	int i, baseq, err = 0;
 
-	if (!test_bit(accel->pool, &adapter->fwd_bitmask))
+	if (!test_bit(accel->pool, adapter->fwd_bitmask))
 		return 0;
 
 	baseq = accel->pool * adapter->num_rx_queues_per_pool;
-	netdev_dbg(vdev, "pool %i:%i queues %i:%i VSI bitmask %lx\n",
+	netdev_dbg(vdev, "pool %i:%i queues %i:%i\n",
 		   accel->pool, adapter->num_rx_pools,
-		   baseq, baseq + adapter->num_rx_queues_per_pool,
-		   adapter->fwd_bitmask);
+		   baseq, baseq + adapter->num_rx_queues_per_pool);
 
 	accel->netdev = vdev;
 	accel->rx_base_queue = rxbase = baseq;
@@ -6282,7 +6281,7 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter,
 	}
 
 	/* PF holds first pool slot */
-	set_bit(0, &adapter->fwd_bitmask);
+	set_bit(0, adapter->fwd_bitmask);
 	set_bit(__IXGBE_DOWN, &adapter->state);
 
 	return 0;
@@ -8848,7 +8847,6 @@ int ixgbe_setup_tc(struct net_device *dev, u8 tc)
 {
 	struct ixgbe_adapter *adapter = netdev_priv(dev);
 	struct ixgbe_hw *hw = &adapter->hw;
-	bool pools;
 
 	/* Hardware supports up to 8 traffic classes */
 	if (tc > adapter->dcb_cfg.num_tcs.pg_tcs)
@@ -8857,10 +8855,6 @@ int ixgbe_setup_tc(struct net_device *dev, u8 tc)
 	if (hw->mac.type == ixgbe_mac_82598EB && tc && tc < MAX_TRAFFIC_CLASS)
 		return -EINVAL;
 
-	pools = (find_first_zero_bit(&adapter->fwd_bitmask, 32) > 1);
-	if (tc && pools && adapter->num_rx_pools > IXGBE_MAX_DCBMACVLANS)
-		return -EBUSY;
-
 	/* Hardware has to reinitialize queues and interrupts to
 	 * match packet buffer alignment. Unfortunately, the
 	 * hardware is not flexible enough to do this dynamically.
@@ -9797,6 +9791,7 @@ static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
 	struct ixgbe_fwd_adapter *fwd_adapter = NULL;
 	struct ixgbe_adapter *adapter = netdev_priv(pdev);
 	int used_pools = adapter->num_vfs + adapter->num_rx_pools;
+	int tcs = netdev_get_num_tc(pdev) ? : 1;
 	unsigned int limit;
 	int pool, err;
 
@@ -9824,7 +9819,7 @@ static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
 	}
 
 	if (((adapter->flags & IXGBE_FLAG_DCB_ENABLED) &&
-	      adapter->num_rx_pools > IXGBE_MAX_DCBMACVLANS - 1) ||
+	      adapter->num_rx_pools >= (MAX_TX_QUEUES / tcs)) ||
 	    (adapter->num_rx_pools > IXGBE_MAX_MACVLANS))
 		return ERR_PTR(-EBUSY);
 
@@ -9832,9 +9827,9 @@ static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
 	if (!fwd_adapter)
 		return ERR_PTR(-ENOMEM);
 
-	pool = find_first_zero_bit(&adapter->fwd_bitmask, 32);
-	set_bit(pool, &adapter->fwd_bitmask);
-	limit = find_last_bit(&adapter->fwd_bitmask, 32);
+	pool = find_first_zero_bit(adapter->fwd_bitmask, adapter->num_rx_pools);
+	set_bit(pool, adapter->fwd_bitmask);
+	limit = find_last_bit(adapter->fwd_bitmask, adapter->num_rx_pools + 1);
 
 	/* Enable VMDq flag so device will be set in VM mode */
 	adapter->flags |= IXGBE_FLAG_VMDQ_ENABLED | IXGBE_FLAG_SRIOV_ENABLED;
@@ -9860,7 +9855,7 @@ static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
 	/* unwind counter and free adapter struct */
 	netdev_info(pdev,
 		    "%s: dfwd hardware acceleration failed\n", vdev->name);
-	clear_bit(pool, &adapter->fwd_bitmask);
+	clear_bit(pool, adapter->fwd_bitmask);
 	kfree(fwd_adapter);
 	return ERR_PTR(err);
 }
@@ -9871,9 +9866,9 @@ static void ixgbe_fwd_del(struct net_device *pdev, void *priv)
 	struct ixgbe_adapter *adapter = fwd_adapter->real_adapter;
 	unsigned int limit;
 
-	clear_bit(fwd_adapter->pool, &adapter->fwd_bitmask);
+	clear_bit(fwd_adapter->pool, adapter->fwd_bitmask);
 
-	limit = find_last_bit(&adapter->fwd_bitmask, 32);
+	limit = find_last_bit(adapter->fwd_bitmask, adapter->num_rx_pools);
 	adapter->ring_feature[RING_F_VMDQ].limit = limit + 1;
 	ixgbe_fwd_ring_down(fwd_adapter->netdev, fwd_adapter);
 
@@ -9888,11 +9883,11 @@ static void ixgbe_fwd_del(struct net_device *pdev, void *priv)
 	}
 
 	ixgbe_setup_tc(pdev, netdev_get_num_tc(pdev));
-	netdev_dbg(pdev, "pool %i:%i queues %i:%i VSI bitmask %lx\n",
+	netdev_dbg(pdev, "pool %i:%i queues %i:%i\n",
 		   fwd_adapter->pool, adapter->num_rx_pools,
 		   fwd_adapter->rx_base_queue,
-		   fwd_adapter->rx_base_queue + adapter->num_rx_queues_per_pool,
-		   adapter->fwd_bitmask);
+		   fwd_adapter->rx_base_queue +
+		   adapter->num_rx_queues_per_pool);
 	kfree(fwd_adapter);
 }
 
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
index 15d89258fbc3..0085f4632966 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -290,10 +290,9 @@ static int ixgbe_pci_sriov_enable(struct pci_dev *dev, int num_vfs)
 {
 #ifdef CONFIG_PCI_IOV
 	struct ixgbe_adapter *adapter = pci_get_drvdata(dev);
-	int err = 0;
-	u8 num_tc;
-	int i;
 	int pre_existing_vfs = pci_num_vf(dev);
+	int err = 0, num_rx_pools, i, limit;
+	u8 num_tc;
 
 	if (pre_existing_vfs && pre_existing_vfs != num_vfs)
 		err = ixgbe_disable_sriov(adapter);
@@ -316,22 +315,14 @@ static int ixgbe_pci_sriov_enable(struct pci_dev *dev, int num_vfs)
 	 * other values out of range.
 	 */
 	num_tc = netdev_get_num_tc(adapter->netdev);
+	num_rx_pools = adapter->num_rx_pools;
+	limit = (num_tc > 4) ? IXGBE_MAX_VFS_8TC :
+		(num_tc > 1) ? IXGBE_MAX_VFS_4TC : IXGBE_MAX_VFS_1TC;
 
-	if (num_tc > 4) {
-		if ((num_vfs + adapter->num_rx_pools) > IXGBE_MAX_VFS_8TC) {
-			e_dev_err("Currently the device is configured with %d TCs, Creating more than %d VFs is not allowed\n", num_tc, IXGBE_MAX_VFS_8TC);
-			return -EPERM;
-		}
-	} else if ((num_tc > 1) && (num_tc <= 4)) {
-		if ((num_vfs + adapter->num_rx_pools) > IXGBE_MAX_VFS_4TC) {
-			e_dev_err("Currently the device is configured with %d TCs, Creating more than %d VFs is not allowed\n", num_tc, IXGBE_MAX_VFS_4TC);
-			return -EPERM;
-		}
-	} else {
-		if ((num_vfs + adapter->num_rx_pools) > IXGBE_MAX_VFS_1TC) {
-			e_dev_err("Currently the device is configured with %d TCs, Creating more than %d VFs is not allowed\n", num_tc, IXGBE_MAX_VFS_1TC);
-			return -EPERM;
-		}
+	if (num_vfs > (limit - num_rx_pools)) {
+		e_dev_err("Currently configured with %d TCs, and %d offloaded macvlans. Creating more than %d VFs is not allowed\n",
+			  num_tc, num_rx_pools - 1, limit - num_rx_pools);
+		return -EPERM;
 	}
 
 	err = __ixgbe_enable_sriov(adapter, num_vfs);



* [Intel-wired-lan] [jkirsher/next-queue PATCH 06/16] ixgbe: Use ring values to test for Tx pending
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
                   ` (4 preceding siblings ...)
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 05/16] ixgbe: Fix limitations on macvlan so we can support up to 63 offloaded devices Alexander Duyck
@ 2017-11-22 18:56 ` Alexander Duyck
  2017-11-29 17:04   ` Bowers, AndrewX
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 07/16] ixgbe: Drop l2_accel_priv data pointer from ring struct Alexander Duyck
                   ` (9 subsequent siblings)
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:56 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

This patch simplifies the check for Tx pending traffic and makes it
more holistic: any difference between next_to_use and next_to_clean is
much more informative than whether head and tail are equal, since it is
possible for us to either not update tail, or not be notified of
completed work, in which case next_to_clean would not be equal to head.

In addition, the simplification means we no longer have to read
hardware, which allows us to drop a number of variables that were
previously being used in the call.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   20 ++++----------------
 1 file changed, 4 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 101b3521ab0b..69bababc0cf6 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1064,24 +1064,12 @@ static u64 ixgbe_get_tx_completed(struct ixgbe_ring *ring)
 
 static u64 ixgbe_get_tx_pending(struct ixgbe_ring *ring)
 {
-	struct ixgbe_adapter *adapter;
-	struct ixgbe_hw *hw;
-	u32 head, tail;
+	unsigned int head, tail;
 
-	if (ring->l2_accel_priv)
-		adapter = ring->l2_accel_priv->real_adapter;
-	else
-		adapter = netdev_priv(ring->netdev);
+	head = ring->next_to_clean;
+	tail = ring->next_to_use;
 
-	hw = &adapter->hw;
-	head = IXGBE_READ_REG(hw, IXGBE_TDH(ring->reg_idx));
-	tail = IXGBE_READ_REG(hw, IXGBE_TDT(ring->reg_idx));
-
-	if (head != tail)
-		return (head < tail) ?
-			tail - head : (tail + ring->count - head);
-
-	return 0;
+	return ((head <= tail) ? tail : tail + ring->count) - head;
 }
 
 static inline bool ixgbe_check_tx_hang(struct ixgbe_ring *tx_ring)


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 07/16] ixgbe: Drop l2_accel_priv data pointer from ring struct
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
                   ` (5 preceding siblings ...)
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 06/16] ixgbe: Use ring values to test for Tx pending Alexander Duyck
@ 2017-11-22 18:56 ` Alexander Duyck
  2017-11-29 17:04   ` Bowers, AndrewX
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 08/16] ixgbe: Assume provided MAC filter has been verified by macvlan Alexander Duyck
                   ` (8 subsequent siblings)
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:56 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

The L2 acceleration private pointer isn't needed in the ring struct. It
isn't really used anywhere other than to test whether we are supporting
an offloaded macvlan netdev, and it is much easier to verify that by
checking whether the ring's netdev is ixgbe based.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe.h      |    1 -
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   23 +++++++++++++----------
 2 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index 7a421b70afce..09def116bb48 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -332,7 +332,6 @@ struct ixgbe_ring {
 	struct net_device *netdev;	/* netdev ring belongs to */
 	struct bpf_prog *xdp_prog;
 	struct device *dev;		/* device for DMA mapping */
-	struct ixgbe_fwd_adapter *l2_accel_priv;
 	void *desc;			/* descriptor ring memory */
 	union {
 		struct ixgbe_tx_buffer *tx_buffer_info;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 69bababc0cf6..09754519a0d9 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -192,6 +192,13 @@ static int ixgbe_notify_dca(struct notifier_block *, unsigned long event,
 static bool ixgbe_check_cfg_remove(struct ixgbe_hw *hw, struct pci_dev *pdev);
 static void ixgbe_watchdog_link_is_down(struct ixgbe_adapter *);
 
+static const struct net_device_ops ixgbe_netdev_ops;
+
+static bool netif_is_ixgbe(struct net_device *dev)
+{
+	return dev && (dev->netdev_ops == &ixgbe_netdev_ops);
+}
+
 static int ixgbe_read_pci_cfg_word_parent(struct ixgbe_adapter *adapter,
 					  u32 reg, u16 *value)
 {
@@ -4479,8 +4486,9 @@ static void ixgbe_vlan_strip_disable(struct ixgbe_adapter *adapter)
 		for (i = 0; i < adapter->num_rx_queues; i++) {
 			struct ixgbe_ring *ring = adapter->rx_ring[i];
 
-			if (ring->l2_accel_priv)
+			if (!netif_is_ixgbe(ring->netdev))
 				continue;
+
 			j = ring->reg_idx;
 			vlnctrl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(j));
 			vlnctrl &= ~IXGBE_RXDCTL_VME;
@@ -4516,8 +4524,9 @@ static void ixgbe_vlan_strip_enable(struct ixgbe_adapter *adapter)
 		for (i = 0; i < adapter->num_rx_queues; i++) {
 			struct ixgbe_ring *ring = adapter->rx_ring[i];
 
-			if (ring->l2_accel_priv)
+			if (!netif_is_ixgbe(ring->netdev))
 				continue;
+
 			j = ring->reg_idx;
 			vlnctrl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(j));
 			vlnctrl |= IXGBE_RXDCTL_VME;
@@ -5331,7 +5340,6 @@ static void ixgbe_disable_fwd_ring(struct ixgbe_fwd_adapter *vadapter,
 	usleep_range(10000, 20000);
 	ixgbe_irq_disable_queues(adapter, BIT_ULL(index));
 	ixgbe_clean_rx_ring(rx_ring);
-	rx_ring->l2_accel_priv = NULL;
 }
 
 static int ixgbe_fwd_ring_down(struct net_device *vdev,
@@ -5349,10 +5357,8 @@ static int ixgbe_fwd_ring_down(struct net_device *vdev,
 		adapter->rx_ring[rxbase + i]->netdev = adapter->netdev;
 	}
 
-	for (i = 0; i < adapter->num_rx_queues_per_pool; i++) {
-		adapter->tx_ring[txbase + i]->l2_accel_priv = NULL;
+	for (i = 0; i < adapter->num_rx_queues_per_pool; i++)
 		adapter->tx_ring[txbase + i]->netdev = adapter->netdev;
-	}
 
 
 	return 0;
@@ -5382,14 +5388,11 @@ static int ixgbe_fwd_ring_up(struct net_device *vdev,
 
 	for (i = 0; i < adapter->num_rx_queues_per_pool; i++) {
 		adapter->rx_ring[rxbase + i]->netdev = vdev;
-		adapter->rx_ring[rxbase + i]->l2_accel_priv = accel;
 		ixgbe_configure_rx_ring(adapter, adapter->rx_ring[rxbase + i]);
 	}
 
-	for (i = 0; i < adapter->num_rx_queues_per_pool; i++) {
+	for (i = 0; i < adapter->num_rx_queues_per_pool; i++)
 		adapter->tx_ring[txbase + i]->netdev = vdev;
-		adapter->tx_ring[txbase + i]->l2_accel_priv = accel;
-	}
 
 	queues = min_t(unsigned int,
 		       adapter->num_rx_queues_per_pool, vdev->num_tx_queues);



* [Intel-wired-lan] [jkirsher/next-queue PATCH 08/16] ixgbe: Assume provided MAC filter has been verified by macvlan
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
                   ` (6 preceding siblings ...)
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 07/16] ixgbe: Drop l2_accel_priv data pointer from ring struct Alexander Duyck
@ 2017-11-22 18:56 ` Alexander Duyck
  2017-11-29 17:06   ` Bowers, AndrewX
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 09/16] ixgbe: Default to 1 pool always being allocated Alexander Duyck
                   ` (7 subsequent siblings)
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:56 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

The macvlan driver itself will validate the MAC address that is configured
for a given interface. There is no need for us to verify it again.

Instead we should be checking to verify that we actually allocated the
filter and have not run out of resources to configure a MAC rule in our
filter table.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 09754519a0d9..6b553f96ead9 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -5404,12 +5404,16 @@ static int ixgbe_fwd_ring_up(struct net_device *vdev,
 	if (err)
 		goto fwd_queue_err;
 
-	if (is_valid_ether_addr(vdev->dev_addr))
-		ixgbe_add_mac_filter(adapter, vdev->dev_addr,
-				     VMDQ_P(accel->pool));
+	/* ixgbe_add_mac_filter will return an index if it succeeds, so we
+	 * need to only treat it as an error value if it is negative.
+	 */
+	err = ixgbe_add_mac_filter(adapter, vdev->dev_addr,
+				   VMDQ_P(accel->pool));
+	if (err < 0)
+		goto fwd_queue_err;
 
 	ixgbe_macvlan_set_rx_mode(vdev, VMDQ_P(accel->pool), adapter);
-	return err;
+	return 0;
 fwd_queue_err:
 	ixgbe_fwd_ring_down(vdev, accel);
 	return err;



* [Intel-wired-lan] [jkirsher/next-queue PATCH 09/16] ixgbe: Default to 1 pool always being allocated
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
                   ` (7 preceding siblings ...)
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 08/16] ixgbe: Assume provided MAC filter has been verified by macvlan Alexander Duyck
@ 2017-11-22 18:57 ` Alexander Duyck
  2017-11-29 17:07   ` Bowers, AndrewX
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 10/16] ixgbe: Don't assume dev->num_tc is equal to hardware TC config Alexander Duyck
                   ` (6 subsequent siblings)
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:57 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

We might as well configure the limit to default to 1 pool always for the
interface. This accounts for the fact that the PF counts as 1 pool if
SR-IOV is enabled, and in general we are always running in 1 pool mode when
RSS or DCB is enabled as well, though we don't need to actually evaluate
any of the VMDq features in those cases.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c  |    1 +
 drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c |    7 ++-----
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 6b553f96ead9..611e6980b5ac 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -6127,6 +6127,7 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter,
 	fdir = min_t(int, IXGBE_MAX_FDIR_INDICES, num_online_cpus());
 	adapter->ring_feature[RING_F_FDIR].limit = fdir;
 	adapter->fdir_pballoc = IXGBE_FDIR_PBALLOC_64K;
+	adapter->ring_feature[RING_F_VMDQ].limit = 1;
 #ifdef CONFIG_IXGBE_DCA
 	adapter->flags |= IXGBE_FLAG_DCA_CAPABLE;
 #endif
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
index 0085f4632966..543f2e60e4b7 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -78,12 +78,9 @@ static int __ixgbe_enable_sriov(struct ixgbe_adapter *adapter,
 	struct ixgbe_hw *hw = &adapter->hw;
 	int i;
 
-	adapter->flags |= IXGBE_FLAG_SRIOV_ENABLED;
-
 	/* Enable VMDq flag so device will be set in VM mode */
-	adapter->flags |= IXGBE_FLAG_VMDQ_ENABLED;
-	if (!adapter->ring_feature[RING_F_VMDQ].limit)
-		adapter->ring_feature[RING_F_VMDQ].limit = 1;
+	adapter->flags |= IXGBE_FLAG_SRIOV_ENABLED |
+			  IXGBE_FLAG_VMDQ_ENABLED;
 
 	/* Allocate memory for per VF control structures */
 	adapter->vfinfo = kcalloc(num_vfs, sizeof(struct vf_data_storage),



* [Intel-wired-lan] [jkirsher/next-queue PATCH 10/16] ixgbe: Don't assume dev->num_tc is equal to hardware TC config
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
                   ` (8 preceding siblings ...)
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 09/16] ixgbe: Default to 1 pool always being allocated Alexander Duyck
@ 2017-11-22 18:57 ` Alexander Duyck
  2017-11-29 17:08   ` Bowers, AndrewX
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 11/16] ixgbe/fm10k: Record macvlan stats instead of Rx queue for macvlan offloaded rings Alexander Duyck
                   ` (5 subsequent siblings)
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:57 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

The code throughout ixgbe was assuming that dev->num_tc was populated and
configured by the driver, when in fact it can be configured via mqprio
without any hardware coordination other than restricting us to the real
number of Tx queues we advertise.

Instead of handling things this way we need to keep a local copy of the
number of TCs in use so that we don't accidentally pull in the TC
configuration from mqprio when it is configured in software mode.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe.h         |    1 +
 drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c  |    2 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c |    6 +++---
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c     |   17 ++++++++---------
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c    |   22 ++++++++++++----------
 drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c   |    8 ++++----
 6 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index 09def116bb48..5891731984b1 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -672,6 +672,7 @@ struct ixgbe_adapter {
 	struct ieee_ets *ixgbe_ieee_ets;
 	struct ixgbe_dcb_config dcb_cfg;
 	struct ixgbe_dcb_config temp_dcb_cfg;
+	u8 hw_tcs;
 	u8 dcb_set_bitmap;
 	u8 dcbx_cap;
 	enum ixgbe_fc_mode last_lfc_mode;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c
index 78c52375acc6..b33f3f87e4b1 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c
@@ -571,7 +571,7 @@ static int ixgbe_dcbnl_ieee_setets(struct net_device *dev,
 	if (max_tc > adapter->dcb_cfg.num_tcs.pg_tcs)
 		return -EINVAL;
 
-	if (max_tc != netdev_get_num_tc(dev)) {
+	if (max_tc != adapter->hw_tcs) {
 		err = ixgbe_setup_tc(dev, max_tc);
 		if (err)
 			return err;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index 7634b1955863..539853aea8d2 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -3113,7 +3113,7 @@ static int ixgbe_get_ts_info(struct net_device *dev,
 static unsigned int ixgbe_max_channels(struct ixgbe_adapter *adapter)
 {
 	unsigned int max_combined;
-	u8 tcs = netdev_get_num_tc(adapter->netdev);
+	u8 tcs = adapter->hw_tcs;
 
 	if (!(adapter->flags & IXGBE_FLAG_MSIX_ENABLED)) {
 		/* We only support one q_vector without MSI-X */
@@ -3170,7 +3170,7 @@ static void ixgbe_get_channels(struct net_device *dev,
 		return;
 
 	/* same thing goes for being DCB enabled */
-	if (netdev_get_num_tc(dev) > 1)
+	if (adapter->hw_tcs > 1)
 		return;
 
 	/* if ATR is disabled we can exit */
@@ -3216,7 +3216,7 @@ static int ixgbe_set_channels(struct net_device *dev,
 
 #endif
 	/* use setup TC to update any traffic class queue mapping */
-	return ixgbe_setup_tc(dev, netdev_get_num_tc(dev));
+	return ixgbe_setup_tc(dev, adapter->hw_tcs);
 }
 
 static int ixgbe_get_module_info(struct net_device *dev,
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index cceafbc3f1db..df23a57ddb56 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -47,7 +47,7 @@ static bool ixgbe_cache_ring_dcb_sriov(struct ixgbe_adapter *adapter)
 	struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
 	int i;
 	u16 reg_idx;
-	u8 tcs = netdev_get_num_tc(adapter->netdev);
+	u8 tcs = adapter->hw_tcs;
 
 	/* verify we have DCB queueing enabled before proceeding */
 	if (tcs <= 1)
@@ -111,9 +111,8 @@ static bool ixgbe_cache_ring_dcb_sriov(struct ixgbe_adapter *adapter)
 static void ixgbe_get_first_reg_idx(struct ixgbe_adapter *adapter, u8 tc,
 				    unsigned int *tx, unsigned int *rx)
 {
-	struct net_device *dev = adapter->netdev;
 	struct ixgbe_hw *hw = &adapter->hw;
-	u8 num_tcs = netdev_get_num_tc(dev);
+	u8 num_tcs = adapter->hw_tcs;
 
 	*tx = 0;
 	*rx = 0;
@@ -168,10 +167,9 @@ static void ixgbe_get_first_reg_idx(struct ixgbe_adapter *adapter, u8 tc,
  **/
 static bool ixgbe_cache_ring_dcb(struct ixgbe_adapter *adapter)
 {
-	struct net_device *dev = adapter->netdev;
+	u8 num_tcs = adapter->hw_tcs;
 	unsigned int tx_idx, rx_idx;
 	int tc, offset, rss_i, i;
-	u8 num_tcs = netdev_get_num_tc(dev);
 
 	/* verify we have DCB queueing enabled before proceeding */
 	if (num_tcs <= 1)
@@ -340,7 +338,7 @@ static bool ixgbe_set_dcb_sriov_queues(struct ixgbe_adapter *adapter)
 #ifdef IXGBE_FCOE
 	u16 fcoe_i = 0;
 #endif
-	u8 tcs = netdev_get_num_tc(adapter->netdev);
+	u8 tcs = adapter->hw_tcs;
 
 	/* verify we have DCB queueing enabled before proceeding */
 	if (tcs <= 1)
@@ -440,7 +438,7 @@ static bool ixgbe_set_dcb_queues(struct ixgbe_adapter *adapter)
 	int tcs;
 
 	/* Map queue offset and counts onto allocated tx queues */
-	tcs = netdev_get_num_tc(dev);
+	tcs = adapter->hw_tcs;
 
 	/* verify we have DCB queueing enabled before proceeding */
 	if (tcs <= 1)
@@ -839,7 +837,7 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter,
 	int node = NUMA_NO_NODE;
 	int cpu = -1;
 	int ring_count, size;
-	u8 tcs = netdev_get_num_tc(adapter->netdev);
+	u8 tcs = adapter->hw_tcs;
 
 	ring_count = txr_count + rxr_count + xdp_count;
 	size = sizeof(struct ixgbe_q_vector) +
@@ -1176,7 +1174,7 @@ static void ixgbe_set_interrupt_capability(struct ixgbe_adapter *adapter)
 	 */
 
 	/* Disable DCB unless we only have a single traffic class */
-	if (netdev_get_num_tc(adapter->netdev) > 1) {
+	if (adapter->hw_tcs > 1) {
 		e_dev_warn("Number of DCB TCs exceeds number of available queues. Disabling DCB support.\n");
 		netdev_reset_tc(adapter->netdev);
 
@@ -1188,6 +1186,7 @@ static void ixgbe_set_interrupt_capability(struct ixgbe_adapter *adapter)
 		adapter->dcb_cfg.pfc_mode_enable = false;
 	}
 
+	adapter->hw_tcs = 0;
 	adapter->dcb_cfg.num_tcs.pg_tcs = 1;
 	adapter->dcb_cfg.num_tcs.pfc_tcs = 1;
 
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 611e6980b5ac..090d73df46cc 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -3572,7 +3572,7 @@ static void ixgbe_setup_mtqc(struct ixgbe_adapter *adapter)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
 	u32 rttdcs, mtqc;
-	u8 tcs = netdev_get_num_tc(adapter->netdev);
+	u8 tcs = adapter->hw_tcs;
 
 	if (hw->mac.type == ixgbe_mac_82598EB)
 		return;
@@ -3927,7 +3927,7 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter)
 		if (adapter->ring_feature[RING_F_RSS].mask)
 			mrqc = IXGBE_MRQC_RSSEN;
 	} else {
-		u8 tcs = netdev_get_num_tc(adapter->netdev);
+		u8 tcs = adapter->hw_tcs;
 
 		if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) {
 			if (tcs > 4)
@@ -5195,7 +5195,7 @@ static int ixgbe_lpbthresh(struct ixgbe_adapter *adapter, int pb)
 static void ixgbe_pbthresh_setup(struct ixgbe_adapter *adapter)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
-	int num_tc = netdev_get_num_tc(adapter->netdev);
+	int num_tc = adapter->hw_tcs;
 	int i;
 
 	if (!num_tc)
@@ -5218,7 +5218,7 @@ static void ixgbe_configure_pb(struct ixgbe_adapter *adapter)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
 	int hdrm;
-	u8 tc = netdev_get_num_tc(adapter->netdev);
+	u8 tc = adapter->hw_tcs;
 
 	if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE ||
 	    adapter->flags & IXGBE_FLAG_FDIR_PERFECT_CAPABLE)
@@ -8867,6 +8867,7 @@ int ixgbe_setup_tc(struct net_device *dev, u8 tc)
 		netdev_set_num_tc(dev, tc);
 		ixgbe_set_prio_tc_map(adapter);
 
+		adapter->hw_tcs = tc;
 		adapter->flags |= IXGBE_FLAG_DCB_ENABLED;
 
 		if (adapter->hw.mac.type == ixgbe_mac_82598EB) {
@@ -8880,6 +8881,7 @@ int ixgbe_setup_tc(struct net_device *dev, u8 tc)
 			adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
 
 		adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED;
+		adapter->hw_tcs = tc;
 
 		adapter->temp_dcb_cfg.pfc_mode_enable = false;
 		adapter->dcb_cfg.pfc_mode_enable = false;
@@ -9410,7 +9412,7 @@ void ixgbe_sriov_reinit(struct ixgbe_adapter *adapter)
 	struct net_device *netdev = adapter->netdev;
 
 	rtnl_lock();
-	ixgbe_setup_tc(netdev, netdev_get_num_tc(netdev));
+	ixgbe_setup_tc(netdev, adapter->hw_tcs);
 	rtnl_unlock();
 }
 
@@ -9486,7 +9488,7 @@ static int ixgbe_set_features(struct net_device *netdev,
 		/* We cannot enable ATR if SR-IOV is enabled */
 		if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED ||
 		    /* We cannot enable ATR if we have 2 or more tcs */
-		    (netdev_get_num_tc(netdev) > 1) ||
+		    (adapter->hw_tcs > 1) ||
 		    /* We cannot enable ATR if RSS is disabled */
 		    (adapter->ring_feature[RING_F_RSS].limit <= 1) ||
 		    /* A sample rate of 0 indicates ATR disabled */
@@ -9787,7 +9789,7 @@ static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
 	struct ixgbe_fwd_adapter *fwd_adapter = NULL;
 	struct ixgbe_adapter *adapter = netdev_priv(pdev);
 	int used_pools = adapter->num_vfs + adapter->num_rx_pools;
-	int tcs = netdev_get_num_tc(pdev) ? : 1;
+	int tcs = adapter->hw_tcs ? : 1;
 	unsigned int limit;
 	int pool, err;
 
@@ -9833,7 +9835,7 @@ static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
 	adapter->ring_feature[RING_F_RSS].limit = vdev->num_tx_queues;
 
 	/* Force reinit of ring allocation with VMDQ enabled */
-	err = ixgbe_setup_tc(pdev, netdev_get_num_tc(pdev));
+	err = ixgbe_setup_tc(pdev, adapter->hw_tcs);
 	if (err)
 		goto fwd_add_err;
 	fwd_adapter->pool = pool;
@@ -9878,7 +9880,7 @@ static void ixgbe_fwd_del(struct net_device *pdev, void *priv)
 		adapter->ring_feature[RING_F_RSS].limit = rss;
 	}
 
-	ixgbe_setup_tc(pdev, netdev_get_num_tc(pdev));
+	ixgbe_setup_tc(pdev, adapter->hw_tcs);
 	netdev_dbg(pdev, "pool %i:%i queues %i:%i\n",
 		   fwd_adapter->pool, adapter->num_rx_pools,
 		   fwd_adapter->rx_base_queue,
@@ -9951,7 +9953,7 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
 
 	/* If transitioning XDP modes reconfigure rings */
 	if (!!prog != !!old_prog) {
-		int err = ixgbe_setup_tc(dev, netdev_get_num_tc(dev));
+		int err = ixgbe_setup_tc(dev, adapter->hw_tcs);
 
 		if (err) {
 			rcu_assign_pointer(adapter->xdp_prog, old_prog);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
index 543f2e60e4b7..27a70a52f3c9 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -311,7 +311,7 @@ static int ixgbe_pci_sriov_enable(struct pci_dev *dev, int num_vfs)
 	 * than we have available pools. The PCI bus driver already checks for
 	 * other values out of range.
 	 */
-	num_tc = netdev_get_num_tc(adapter->netdev);
+	num_tc = adapter->hw_tcs;
 	num_rx_pools = adapter->num_rx_pools;
 	limit = (num_tc > 4) ? IXGBE_MAX_VFS_8TC :
 		(num_tc > 1) ? IXGBE_MAX_VFS_4TC : IXGBE_MAX_VFS_1TC;
@@ -713,7 +713,7 @@ static inline void ixgbe_vf_reset_event(struct ixgbe_adapter *adapter, u32 vf)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
 	struct vf_data_storage *vfinfo = &adapter->vfinfo[vf];
-	u8 num_tcs = netdev_get_num_tc(adapter->netdev);
+	u8 num_tcs = adapter->hw_tcs;
 
 	/* remove VLAN filters beloning to this VF */
 	ixgbe_clear_vf_vlans(adapter, vf);
@@ -921,7 +921,7 @@ static int ixgbe_set_vf_vlan_msg(struct ixgbe_adapter *adapter,
 {
 	u32 add = (msgbuf[0] & IXGBE_VT_MSGINFO_MASK) >> IXGBE_VT_MSGINFO_SHIFT;
 	u32 vid = (msgbuf[1] & IXGBE_VLVF_VLANID_MASK);
-	u8 tcs = netdev_get_num_tc(adapter->netdev);
+	u8 tcs = adapter->hw_tcs;
 
 	if (adapter->vfinfo[vf].pf_vlan || tcs) {
 		e_warn(drv,
@@ -1009,7 +1009,7 @@ static int ixgbe_get_vf_queues(struct ixgbe_adapter *adapter,
 	struct net_device *dev = adapter->netdev;
 	struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
 	unsigned int default_tc = 0;
-	u8 num_tcs = netdev_get_num_tc(dev);
+	u8 num_tcs = adapter->hw_tcs;
 
 	/* verify the PF is supporting the correct APIs */
 	switch (adapter->vfinfo[vf].vf_api) {



* [Intel-wired-lan] [jkirsher/next-queue PATCH 11/16] ixgbe/fm10k: Record macvlan stats instead of Rx queue for macvlan offloaded rings
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
                   ` (9 preceding siblings ...)
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 10/16] ixgbe: Don't assume dev->num_tc is equal to hardware TC config Alexander Duyck
@ 2017-11-22 18:57 ` Alexander Duyck
  2017-11-29 17:08   ` Bowers, AndrewX
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 12/16] ixgbe: Do not manipulate macvlan Tx queues when performing macvlan offload Alexander Duyck
                   ` (4 subsequent siblings)
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:57 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

We shouldn't be recording the Rx queue on macvlan offloaded frames since
the macvlan is normally brought up as a single queue device, and it will
trigger warnings for RPS if we have recorded queue IDs larger than the
"real_num_rx_queues" value recorded for the device.

Instead we should be recording the macvlan statistics since we are
bypassing the normal macvlan statistics that would have been generated by
the receive path.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/fm10k/fm10k_main.c |   14 ++++++--------
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   10 ++++++++--
 2 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_main.c b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
index dbd69310f263..9abd7fff91f3 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_main.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
@@ -446,13 +446,13 @@ static void fm10k_type_trans(struct fm10k_ring *rx_ring,
 
 	skb->protocol = eth_type_trans(skb, dev);
 
+	/* Record Rx queue, or update macvlan statistics */
 	if (!l2_accel)
-		return;
-
-	/* update MACVLAN statistics */
-	macvlan_count_rx(netdev_priv(dev), skb->len + ETH_HLEN, 1,
-			 !!(rx_desc->w.hdr_info &
-			    cpu_to_le16(FM10K_RXD_HDR_INFO_XC_MASK)));
+		skb_record_rx_queue(skb, rx_ring->queue_index);
+	else
+		macvlan_count_rx(netdev_priv(dev), skb->len + ETH_HLEN, true,
+				 (skb->pkt_type == PACKET_BROADCAST) ||
+				 (skb->pkt_type == PACKET_MULTICAST));
 }
 
 /**
@@ -479,8 +479,6 @@ static unsigned int fm10k_process_skb_fields(struct fm10k_ring *rx_ring,
 
 	FM10K_CB(skb)->fi.w.vlan = rx_desc->w.vlan;
 
-	skb_record_rx_queue(skb, rx_ring->queue_index);
-
 	FM10K_CB(skb)->fi.d.glort = rx_desc->d.glort;
 
 	if (rx_desc->w.vlan) {
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 090d73df46cc..825d093ebf2e 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1749,9 +1749,15 @@ static void ixgbe_process_skb_fields(struct ixgbe_ring *rx_ring,
 		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vid);
 	}
 
-	skb_record_rx_queue(skb, rx_ring->queue_index);
-
 	skb->protocol = eth_type_trans(skb, dev);
+
+	/* record Rx queue, or update MACVLAN statistics */
+	if (netif_is_ixgbe(dev))
+		skb_record_rx_queue(skb, rx_ring->queue_index);
+	else
+		macvlan_count_rx(netdev_priv(dev), skb->len + ETH_HLEN, true,
+				 (skb->pkt_type == PACKET_BROADCAST) ||
+				 (skb->pkt_type == PACKET_MULTICAST));
 }
 
 static void ixgbe_rx_skb(struct ixgbe_q_vector *q_vector,



* [Intel-wired-lan] [jkirsher/next-queue PATCH 12/16] ixgbe: Do not manipulate macvlan Tx queues when performing macvlan offload
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
                   ` (10 preceding siblings ...)
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 11/16] ixgbe/fm10k: Record macvlan stats instead of Rx queue for macvlan offloaded rings Alexander Duyck
@ 2017-11-22 18:57 ` Alexander Duyck
  2017-11-29 17:09   ` Bowers, AndrewX
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 13/16] ixgbe: avoid bringing rings up/down as macvlans are added/removed Alexander Duyck
                   ` (3 subsequent siblings)
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:57 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

We should not be stopping/starting the upper device's Tx queues when
handling a macvlan offload. Instead we should be stopping and starting
traffic on our own queues.

In order to prevent us from doing this, I am updating the code so that we no
longer change the queue configuration on the upper device, nor do we update
the queue_index on our own device. Instead we can just use the queue index
for our local device and not update the netdev in the case of the transmit
rings.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |   12 --
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |  121 +++++--------------------
 2 files changed, 25 insertions(+), 108 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index df23a57ddb56..dc7f3ef2957b 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -920,11 +920,7 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter,
 
 		/* apply Tx specific ring traits */
 		ring->count = adapter->tx_ring_count;
-		if (adapter->num_rx_pools > 1)
-			ring->queue_index =
-				txr_idx % adapter->num_rx_queues_per_pool;
-		else
-			ring->queue_index = txr_idx;
+		ring->queue_index = txr_idx;
 
 		/* assign ring to adapter */
 		adapter->tx_ring[txr_idx] = ring;
@@ -994,11 +990,7 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter,
 #endif /* IXGBE_FCOE */
 		/* apply Rx specific ring traits */
 		ring->count = adapter->rx_ring_count;
-		if (adapter->num_rx_pools > 1)
-			ring->queue_index =
-				rxr_idx % adapter->num_rx_queues_per_pool;
-		else
-			ring->queue_index = rxr_idx;
+		ring->queue_index = rxr_idx;
 
 		/* assign ring to adapter */
 		adapter->rx_ring[rxr_idx] = ring;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 825d093ebf2e..c21fd98bd45a 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -5339,12 +5339,11 @@ static void ixgbe_disable_fwd_ring(struct ixgbe_fwd_adapter *vadapter,
 				   struct ixgbe_ring *rx_ring)
 {
 	struct ixgbe_adapter *adapter = vadapter->real_adapter;
-	int index = rx_ring->queue_index + vadapter->rx_base_queue;
 
 	/* shutdown specific queue receive and wait for dma to settle */
 	ixgbe_disable_rx_queue(adapter, rx_ring);
 	usleep_range(10000, 20000);
-	ixgbe_irq_disable_queues(adapter, BIT_ULL(index));
+	ixgbe_irq_disable_queues(adapter, BIT_ULL(rx_ring->queue_index));
 	ixgbe_clean_rx_ring(rx_ring);
 }
 
@@ -5353,20 +5352,13 @@ static int ixgbe_fwd_ring_down(struct net_device *vdev,
 {
 	struct ixgbe_adapter *adapter = accel->real_adapter;
 	unsigned int rxbase = accel->rx_base_queue;
-	unsigned int txbase = accel->tx_base_queue;
 	int i;
 
-	netif_tx_stop_all_queues(vdev);
-
 	for (i = 0; i < adapter->num_rx_queues_per_pool; i++) {
 		ixgbe_disable_fwd_ring(accel, adapter->rx_ring[rxbase + i]);
 		adapter->rx_ring[rxbase + i]->netdev = adapter->netdev;
 	}
 
-	for (i = 0; i < adapter->num_rx_queues_per_pool; i++)
-		adapter->tx_ring[txbase + i]->netdev = adapter->netdev;
-
-
 	return 0;
 }
 
@@ -5374,8 +5366,7 @@ static int ixgbe_fwd_ring_up(struct net_device *vdev,
 			     struct ixgbe_fwd_adapter *accel)
 {
 	struct ixgbe_adapter *adapter = accel->real_adapter;
-	unsigned int rxbase, txbase, queues;
-	int i, baseq, err = 0;
+	int i, baseq, err;
 
 	if (!test_bit(accel->pool, adapter->fwd_bitmask))
 		return 0;
@@ -5386,30 +5377,17 @@ static int ixgbe_fwd_ring_up(struct net_device *vdev,
 		   baseq, baseq + adapter->num_rx_queues_per_pool);
 
 	accel->netdev = vdev;
-	accel->rx_base_queue = rxbase = baseq;
-	accel->tx_base_queue = txbase = baseq;
+	accel->rx_base_queue = baseq;
+	accel->tx_base_queue = baseq;
 
 	for (i = 0; i < adapter->num_rx_queues_per_pool; i++)
-		ixgbe_disable_fwd_ring(accel, adapter->rx_ring[rxbase + i]);
+		ixgbe_disable_fwd_ring(accel, adapter->rx_ring[baseq + i]);
 
 	for (i = 0; i < adapter->num_rx_queues_per_pool; i++) {
-		adapter->rx_ring[rxbase + i]->netdev = vdev;
-		ixgbe_configure_rx_ring(adapter, adapter->rx_ring[rxbase + i]);
+		adapter->rx_ring[baseq + i]->netdev = vdev;
+		ixgbe_configure_rx_ring(adapter, adapter->rx_ring[baseq + i]);
 	}
 
-	for (i = 0; i < adapter->num_rx_queues_per_pool; i++)
-		adapter->tx_ring[txbase + i]->netdev = vdev;
-
-	queues = min_t(unsigned int,
-		       adapter->num_rx_queues_per_pool, vdev->num_tx_queues);
-	err = netif_set_real_num_tx_queues(vdev, queues);
-	if (err)
-		goto fwd_queue_err;
-
-	err = netif_set_real_num_rx_queues(vdev, queues);
-	if (err)
-		goto fwd_queue_err;
-
 	/* ixgbe_add_mac_filter will return an index if it succeeds, so we
 	 * need to only treat it as an error value if it is negative.
 	 */
@@ -5897,21 +5875,6 @@ static void ixgbe_fdir_filter_exit(struct ixgbe_adapter *adapter)
 	spin_unlock(&adapter->fdir_perfect_lock);
 }
 
-static int ixgbe_disable_macvlan(struct net_device *upper, void *data)
-{
-	if (netif_is_macvlan(upper)) {
-		struct macvlan_dev *vlan = netdev_priv(upper);
-
-		if (vlan->fwd_priv) {
-			netif_tx_stop_all_queues(upper);
-			netif_carrier_off(upper);
-			netif_tx_disable(upper);
-		}
-	}
-
-	return 0;
-}
-
 void ixgbe_down(struct ixgbe_adapter *adapter)
 {
 	struct net_device *netdev = adapter->netdev;
@@ -5941,10 +5904,6 @@ void ixgbe_down(struct ixgbe_adapter *adapter)
 	netif_carrier_off(netdev);
 	netif_tx_disable(netdev);
 
-	/* disable any upper devices */
-	netdev_walk_all_upper_dev_rcu(adapter->netdev,
-				      ixgbe_disable_macvlan, NULL);
-
 	ixgbe_irq_disable(adapter);
 
 	ixgbe_napi_disable_all(adapter);
@@ -7254,18 +7213,6 @@ static void ixgbe_update_default_up(struct ixgbe_adapter *adapter)
 #endif
 }
 
-static int ixgbe_enable_macvlan(struct net_device *upper, void *data)
-{
-	if (netif_is_macvlan(upper)) {
-		struct macvlan_dev *vlan = netdev_priv(upper);
-
-		if (vlan->fwd_priv)
-			netif_tx_wake_all_queues(upper);
-	}
-
-	return 0;
-}
-
 /**
  * ixgbe_watchdog_link_is_up - update netif_carrier status and
  *                             print link up message
@@ -7346,12 +7293,6 @@ static void ixgbe_watchdog_link_is_up(struct ixgbe_adapter *adapter)
 	/* enable transmits */
 	netif_tx_wake_all_queues(adapter->netdev);
 
-	/* enable any upper devices */
-	rtnl_lock();
-	netdev_walk_all_upper_dev_rcu(adapter->netdev,
-				      ixgbe_enable_macvlan, NULL);
-	rtnl_unlock();
-
 	/* update the default user priority for VFs */
 	ixgbe_update_default_up(adapter);
 
@@ -8312,14 +8253,19 @@ static u16 ixgbe_select_queue(struct net_device *dev, struct sk_buff *skb,
 			      void *accel_priv, select_queue_fallback_t fallback)
 {
 	struct ixgbe_fwd_adapter *fwd_adapter = accel_priv;
-#ifdef IXGBE_FCOE
 	struct ixgbe_adapter *adapter;
-	struct ixgbe_ring_feature *f;
 	int txq;
+#ifdef IXGBE_FCOE
+	struct ixgbe_ring_feature *f;
 #endif
 
-	if (fwd_adapter)
-		return skb->queue_mapping + fwd_adapter->tx_base_queue;
+	if (fwd_adapter) {
+		adapter = netdev_priv(dev);
+		txq = reciprocal_scale(skb_get_hash(skb),
+				       adapter->num_rx_queues_per_pool);
+
+		return txq + fwd_adapter->tx_base_queue;
+	}
 
 #ifdef IXGBE_FCOE
 
@@ -9806,22 +9752,6 @@ static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
 	if (used_pools >= IXGBE_MAX_VF_FUNCTIONS)
 		return ERR_PTR(-EINVAL);
 
-#ifdef CONFIG_RPS
-	if (vdev->num_rx_queues != vdev->num_tx_queues) {
-		netdev_info(pdev, "%s: Only supports a single queue count for TX and RX\n",
-			    vdev->name);
-		return ERR_PTR(-EINVAL);
-	}
-#endif
-	/* Check for hardware restriction on number of rx/tx queues */
-	if (vdev->num_tx_queues > IXGBE_MAX_L2A_QUEUES ||
-	    vdev->num_tx_queues == IXGBE_BAD_L2A_QUEUE) {
-		netdev_info(pdev,
-			    "%s: Supports RX/TX Queue counts 1,2, and 4\n",
-			    pdev->name);
-		return ERR_PTR(-EINVAL);
-	}
-
 	if (((adapter->flags & IXGBE_FLAG_DCB_ENABLED) &&
 	      adapter->num_rx_pools >= (MAX_TX_QUEUES / tcs)) ||
 	    (adapter->num_rx_pools > IXGBE_MAX_MACVLANS))
@@ -9838,24 +9768,19 @@ static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
 	/* Enable VMDq flag so device will be set in VM mode */
 	adapter->flags |= IXGBE_FLAG_VMDQ_ENABLED | IXGBE_FLAG_SRIOV_ENABLED;
 	adapter->ring_feature[RING_F_VMDQ].limit = limit + 1;
-	adapter->ring_feature[RING_F_RSS].limit = vdev->num_tx_queues;
 
-	/* Force reinit of ring allocation with VMDQ enabled */
-	err = ixgbe_setup_tc(pdev, adapter->hw_tcs);
-	if (err)
-		goto fwd_add_err;
 	fwd_adapter->pool = pool;
 	fwd_adapter->real_adapter = adapter;
 
-	if (netif_running(pdev)) {
+	/* Force reinit of ring allocation with VMDQ enabled */
+	err = ixgbe_setup_tc(pdev, adapter->hw_tcs);
+
+	if (!err && netif_running(pdev))
 		err = ixgbe_fwd_ring_up(vdev, fwd_adapter);
-		if (err)
-			goto fwd_add_err;
-		netif_tx_start_all_queues(vdev);
-	}
 
-	return fwd_adapter;
-fwd_add_err:
+	if (!err)
+		return fwd_adapter;
+
 	/* unwind counter and free adapter struct */
 	netdev_info(pdev,
 		    "%s: dfwd hardware acceleration failed\n", vdev->name);


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 13/16] ixgbe: avoid bringing rings up/down as macvlans are added/removed
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
                   ` (11 preceding siblings ...)
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 12/16] ixgbe: Do not manipulate macvlan Tx queues when performing macvlan offload Alexander Duyck
@ 2017-11-22 18:57 ` Alexander Duyck
  2017-11-29 17:09   ` Bowers, AndrewX
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 14/16] ixgbe: Fix handling of macvlan Tx offload Alexander Duyck
                   ` (2 subsequent siblings)
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:57 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

This change makes it so that instead of bringing rings up/down for various events
we just update the netdev pointer for the Rx ring and set or clear the MAC
filter for the interface. By doing it this way we can avoid a number of
races and issues in the code as things were getting messy with the macvlan
clean-up racing with the interface clean-up to bring the rings down on
shutdown.

With this change we opt to leave the rings owned by the PF interface for
both Tx and Rx and just direct the packets once they are received to the
macvlan netdev.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |   28 +++++--
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |  102 +++++++++++++------------
 2 files changed, 72 insertions(+), 58 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index dc7f3ef2957b..cfe5a6af04d0 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -46,7 +46,7 @@ static bool ixgbe_cache_ring_dcb_sriov(struct ixgbe_adapter *adapter)
 #endif /* IXGBE_FCOE */
 	struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
 	int i;
-	u16 reg_idx;
+	u16 reg_idx, pool;
 	u8 tcs = adapter->hw_tcs;
 
 	/* verify we have DCB queueing enabled before proceeding */
@@ -58,12 +58,16 @@ static bool ixgbe_cache_ring_dcb_sriov(struct ixgbe_adapter *adapter)
 		return false;
 
 	/* start at VMDq register offset for SR-IOV enabled setups */
+	pool = 0;
 	reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask);
-	for (i = 0; i < adapter->num_rx_queues; i++, reg_idx++) {
+	for (i = 0, pool = 0; i < adapter->num_rx_queues; i++, reg_idx++) {
 		/* If we are greater than indices move to next pool */
-		if ((reg_idx & ~vmdq->mask) >= tcs)
+		if ((reg_idx & ~vmdq->mask) >= tcs) {
+			pool++;
 			reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask);
+		}
 		adapter->rx_ring[i]->reg_idx = reg_idx;
+		adapter->rx_ring[i]->netdev = pool ? NULL : adapter->netdev;
 	}
 
 	reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask);
@@ -92,6 +96,7 @@ static bool ixgbe_cache_ring_dcb_sriov(struct ixgbe_adapter *adapter)
 		for (i = fcoe->offset; i < adapter->num_rx_queues; i++) {
 			reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask) + fcoe_tc;
 			adapter->rx_ring[i]->reg_idx = reg_idx;
+			adapter->rx_ring[i]->netdev = adapter->netdev;
 			reg_idx++;
 		}
 
@@ -182,6 +187,7 @@ static bool ixgbe_cache_ring_dcb(struct ixgbe_adapter *adapter)
 		for (i = 0; i < rss_i; i++, tx_idx++, rx_idx++) {
 			adapter->tx_ring[offset + i]->reg_idx = tx_idx;
 			adapter->rx_ring[offset + i]->reg_idx = rx_idx;
+			adapter->rx_ring[offset + i]->netdev = adapter->netdev;
 			adapter->tx_ring[offset + i]->dcb_tc = tc;
 			adapter->rx_ring[offset + i]->dcb_tc = tc;
 		}
@@ -206,14 +212,15 @@ static bool ixgbe_cache_ring_sriov(struct ixgbe_adapter *adapter)
 #endif /* IXGBE_FCOE */
 	struct ixgbe_ring_feature *vmdq = &adapter->ring_feature[RING_F_VMDQ];
 	struct ixgbe_ring_feature *rss = &adapter->ring_feature[RING_F_RSS];
+	u16 reg_idx, pool;
 	int i;
-	u16 reg_idx;
 
 	/* only proceed if VMDq is enabled */
 	if (!(adapter->flags & IXGBE_FLAG_VMDQ_ENABLED))
 		return false;
 
 	/* start at VMDq register offset for SR-IOV enabled setups */
+	pool = 0;
 	reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask);
 	for (i = 0; i < adapter->num_rx_queues; i++, reg_idx++) {
 #ifdef IXGBE_FCOE
@@ -222,15 +229,20 @@ static bool ixgbe_cache_ring_sriov(struct ixgbe_adapter *adapter)
 			break;
 #endif
 		/* If we are greater than indices move to next pool */
-		if ((reg_idx & ~vmdq->mask) >= rss->indices)
+		if ((reg_idx & ~vmdq->mask) >= rss->indices) {
+			pool++;
 			reg_idx = __ALIGN_MASK(reg_idx, ~vmdq->mask);
+		}
 		adapter->rx_ring[i]->reg_idx = reg_idx;
+		adapter->rx_ring[i]->netdev = pool ? NULL : adapter->netdev;
 	}
 
 #ifdef IXGBE_FCOE
 	/* FCoE uses a linear block of queues so just assigning 1:1 */
-	for (; i < adapter->num_rx_queues; i++, reg_idx++)
+	for (; i < adapter->num_rx_queues; i++, reg_idx++) {
 		adapter->rx_ring[i]->reg_idx = reg_idx;
+		adapter->rx_ring[i]->netdev = adapter->netdev;
+	}
 
 #endif
 	reg_idx = vmdq->offset * __ALIGN_MASK(1, ~vmdq->mask);
@@ -267,8 +279,10 @@ static bool ixgbe_cache_ring_rss(struct ixgbe_adapter *adapter)
 {
 	int i, reg_idx;
 
-	for (i = 0; i < adapter->num_rx_queues; i++)
+	for (i = 0; i < adapter->num_rx_queues; i++) {
 		adapter->rx_ring[i]->reg_idx = i;
+		adapter->rx_ring[i]->netdev = adapter->netdev;
+	}
 	for (i = 0, reg_idx = 0; i < adapter->num_tx_queues; i++, reg_idx++)
 		adapter->tx_ring[i]->reg_idx = reg_idx;
 	for (i = 0; i < adapter->num_xdp_queues; i++, reg_idx++)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index c21fd98bd45a..bcd05761b8e1 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1922,10 +1922,13 @@ static bool ixgbe_cleanup_headers(struct ixgbe_ring *rx_ring,
 	if (IS_ERR(skb))
 		return true;
 
-	/* verify that the packet does not have any known errors */
-	if (unlikely(ixgbe_test_staterr(rx_desc,
-					IXGBE_RXDADV_ERR_FRAME_ERR_MASK) &&
-	    !(netdev->features & NETIF_F_RXALL))) {
+	/* Verify netdev is present, and that packet does not have any
+	 * errors that would be unacceptable to the netdev.
+	 */
+	if (!netdev ||
+	    (unlikely(ixgbe_test_staterr(rx_desc,
+					 IXGBE_RXDADV_ERR_FRAME_ERR_MASK) &&
+	     !(netdev->features & NETIF_F_RXALL)))) {
 		dev_kfree_skb_any(skb);
 		return true;
 	}
@@ -5335,33 +5338,6 @@ static void ixgbe_clean_rx_ring(struct ixgbe_ring *rx_ring)
 	rx_ring->next_to_use = 0;
 }
 
-static void ixgbe_disable_fwd_ring(struct ixgbe_fwd_adapter *vadapter,
-				   struct ixgbe_ring *rx_ring)
-{
-	struct ixgbe_adapter *adapter = vadapter->real_adapter;
-
-	/* shutdown specific queue receive and wait for dma to settle */
-	ixgbe_disable_rx_queue(adapter, rx_ring);
-	usleep_range(10000, 20000);
-	ixgbe_irq_disable_queues(adapter, BIT_ULL(rx_ring->queue_index));
-	ixgbe_clean_rx_ring(rx_ring);
-}
-
-static int ixgbe_fwd_ring_down(struct net_device *vdev,
-			       struct ixgbe_fwd_adapter *accel)
-{
-	struct ixgbe_adapter *adapter = accel->real_adapter;
-	unsigned int rxbase = accel->rx_base_queue;
-	int i;
-
-	for (i = 0; i < adapter->num_rx_queues_per_pool; i++) {
-		ixgbe_disable_fwd_ring(accel, adapter->rx_ring[rxbase + i]);
-		adapter->rx_ring[rxbase + i]->netdev = adapter->netdev;
-	}
-
-	return 0;
-}
-
 static int ixgbe_fwd_ring_up(struct net_device *vdev,
 			     struct ixgbe_fwd_adapter *accel)
 {
@@ -5381,25 +5357,26 @@ static int ixgbe_fwd_ring_up(struct net_device *vdev,
 	accel->tx_base_queue = baseq;
 
 	for (i = 0; i < adapter->num_rx_queues_per_pool; i++)
-		ixgbe_disable_fwd_ring(accel, adapter->rx_ring[baseq + i]);
-
-	for (i = 0; i < adapter->num_rx_queues_per_pool; i++) {
 		adapter->rx_ring[baseq + i]->netdev = vdev;
-		ixgbe_configure_rx_ring(adapter, adapter->rx_ring[baseq + i]);
-	}
+
+	/* Guarantee all rings are updated before we update the
+	 * MAC address filter.
+	 */
+	wmb();
 
 	/* ixgbe_add_mac_filter will return an index if it succeeds, so we
 	 * need to only treat it as an error value if it is negative.
 	 */
 	err = ixgbe_add_mac_filter(adapter, vdev->dev_addr,
 				   VMDQ_P(accel->pool));
-	if (err < 0)
-		goto fwd_queue_err;
+	if (err >= 0) {
+		ixgbe_macvlan_set_rx_mode(vdev, accel->pool, adapter);
+		return 0;
+	}
+
+	for (i = 0; i < adapter->num_rx_queues_per_pool; i++)
+		adapter->rx_ring[baseq + i]->netdev = NULL;
 
-	ixgbe_macvlan_set_rx_mode(vdev, VMDQ_P(accel->pool), adapter);
-	return 0;
-fwd_queue_err:
-	ixgbe_fwd_ring_down(vdev, accel);
 	return err;
 }
 
@@ -9791,15 +9768,38 @@ static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
 
 static void ixgbe_fwd_del(struct net_device *pdev, void *priv)
 {
-	struct ixgbe_fwd_adapter *fwd_adapter = priv;
-	struct ixgbe_adapter *adapter = fwd_adapter->real_adapter;
-	unsigned int limit;
+	struct ixgbe_fwd_adapter *accel = priv;
+	struct ixgbe_adapter *adapter = accel->real_adapter;
+	unsigned int rxbase = accel->rx_base_queue;
+	unsigned int limit, i;
 
-	clear_bit(fwd_adapter->pool, adapter->fwd_bitmask);
+	/* delete unicast filter associated with offloaded interface */
+	ixgbe_del_mac_filter(adapter, accel->netdev->dev_addr,
+			     VMDQ_P(accel->pool));
 
+	/* disable ability to receive packets for this pool */
+	IXGBE_WRITE_REG(&adapter->hw, IXGBE_VMOLR(accel->pool), 0);
+
+	/* Allow remaining Rx packets to get flushed out of the
+	 * Rx FIFO before we drop the netdev for the ring.
+	 */
+	usleep_range(10000, 20000);
+
+	for (i = 0; i < adapter->num_rx_queues_per_pool; i++) {
+		struct ixgbe_ring *ring = adapter->rx_ring[rxbase + i];
+		struct ixgbe_q_vector *qv = ring->q_vector;
+
+		/* Make sure we aren't processing any packets and clear
+		 * netdev to shut down the ring.
+		 */
+		if (netif_running(adapter->netdev))
+			napi_synchronize(&qv->napi);
+		ring->netdev = NULL;
+	}
+
+	clear_bit(accel->pool, adapter->fwd_bitmask);
 	limit = find_last_bit(adapter->fwd_bitmask, adapter->num_rx_pools);
 	adapter->ring_feature[RING_F_VMDQ].limit = limit + 1;
-	ixgbe_fwd_ring_down(fwd_adapter->netdev, fwd_adapter);
 
 	/* go back to full RSS if we're done with our VMQs */
 	if (adapter->ring_feature[RING_F_VMDQ].limit == 1) {
@@ -9813,11 +9813,11 @@ static void ixgbe_fwd_del(struct net_device *pdev, void *priv)
 
 	ixgbe_setup_tc(pdev, adapter->hw_tcs);
 	netdev_dbg(pdev, "pool %i:%i queues %i:%i\n",
-		   fwd_adapter->pool, adapter->num_rx_pools,
-		   fwd_adapter->rx_base_queue,
-		   fwd_adapter->rx_base_queue +
+		   accel->pool, adapter->num_rx_pools,
+		   accel->rx_base_queue,
+		   accel->rx_base_queue +
 		   adapter->num_rx_queues_per_pool);
-	kfree(fwd_adapter);
+	kfree(accel);
 }
 
 #define IXGBE_MAX_MAC_HDR_LEN		127



* [Intel-wired-lan] [jkirsher/next-queue PATCH 14/16] ixgbe: Fix handling of macvlan Tx offload
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
                   ` (12 preceding siblings ...)
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 13/16] ixgbe: avoid bringing rings up/down as macvlans are added/removed Alexander Duyck
@ 2017-11-22 18:57 ` Alexander Duyck
  2017-11-29 17:10   ` Bowers, AndrewX
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 15/16] net: Cap number of queues even with accel_priv Alexander Duyck
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 16/16] fm10k: Fix configuration for macvlan offload Alexander Duyck
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:57 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

This update makes it so that we report the actual number of Tx queues via
real_num_tx_queues but are still restricted to RSS on only the first pool
by setting num_tc equal to 1. Doing this means we can only set up XPS on
the queues in that pool, and only those queues should be used for
transmitting anything other than macvlan traffic.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |    4 ++++
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   20 ++++++++++----------
 2 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index cfe5a6af04d0..b3c282d09b18 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -619,6 +619,10 @@ static bool ixgbe_set_sriov_queues(struct ixgbe_adapter *adapter)
 	}
 
 #endif
+	/* populate TC0 for use by pool 0 */
+	netdev_set_tc_queue(adapter->netdev, 0,
+			    adapter->num_rx_queues_per_pool, 0);
+
 	return true;
 }
 
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index bcd05761b8e1..dba69c0bc644 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -6563,20 +6563,12 @@ int ixgbe_open(struct net_device *netdev)
 		goto err_req_irq;
 
 	/* Notify the stack of the actual queue counts. */
-	if (adapter->num_rx_pools > 1)
-		queues = adapter->num_rx_queues_per_pool;
-	else
-		queues = adapter->num_tx_queues;
-
+	queues = adapter->num_tx_queues;
 	err = netif_set_real_num_tx_queues(netdev, queues);
 	if (err)
 		goto err_set_queues;
 
-	if (adapter->num_rx_pools > 1 &&
-	    adapter->num_rx_queues > IXGBE_MAX_L2A_QUEUES)
-		queues = IXGBE_MAX_L2A_QUEUES;
-	else
-		queues = adapter->num_rx_queues;
+	queues = adapter->num_rx_queues;
 	err = netif_set_real_num_rx_queues(netdev, queues);
 	if (err)
 		goto err_set_queues;
@@ -8806,6 +8798,14 @@ int ixgbe_setup_tc(struct net_device *dev, u8 tc)
 	} else {
 		netdev_reset_tc(dev);
 
+		/* To support macvlan offload we have to use num_tc to
+		 * restrict the queues that can be used by the device.
+		 * By doing this we can avoid reporting a false number of
+		 * queues.
+		 */
+		if (!tc && adapter->num_rx_pools > 1)
+			netdev_set_num_tc(dev, 1);
+
 		if (adapter->hw.mac.type == ixgbe_mac_82598EB)
 			adapter->hw.fc.requested_mode = adapter->last_lfc_mode;
 



* [Intel-wired-lan] [jkirsher/next-queue PATCH 15/16] net: Cap number of queues even with accel_priv
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
                   ` (13 preceding siblings ...)
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 14/16] ixgbe: Fix handling of macvlan Tx offload Alexander Duyck
@ 2017-11-22 18:57 ` Alexander Duyck
  2017-11-29 17:11   ` Bowers, AndrewX
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 16/16] fm10k: Fix configuration for macvlan offload Alexander Duyck
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:57 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

With the recent fix to ixgbe we can always cap the number of queues,
regardless of whether accel_priv is in use, since the actual number of
queues is being reported via real_num_tx_queues.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 net/core/dev.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 8ee29f4f5fa9..8ad5634e92f2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3375,8 +3375,7 @@ struct netdev_queue *netdev_pick_tx(struct net_device *dev,
 		else
 			queue_index = __netdev_pick_tx(dev, skb);
 
-		if (!accel_priv)
-			queue_index = netdev_cap_txqueue(dev, queue_index);
+		queue_index = netdev_cap_txqueue(dev, queue_index);
 	}
 
 	skb_set_queue_mapping(skb, queue_index);



* [Intel-wired-lan] [jkirsher/next-queue PATCH 16/16] fm10k: Fix configuration for macvlan offload
  2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
                   ` (14 preceding siblings ...)
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 15/16] net: Cap number of queues even with accel_priv Alexander Duyck
@ 2017-11-22 18:57 ` Alexander Duyck
  2018-01-09 20:40   ` Singh, Krishneil K
  15 siblings, 1 reply; 33+ messages in thread
From: Alexander Duyck @ 2017-11-22 18:57 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

The fm10k driver didn't work correctly when macvlan offload was enabled.
Specifically, the offloaded interfaces would receive no unicast packets.
This was traced down to us not correctly configuring the default VLAN ID
for the port, leaving it at 0.

To correct this we either use the default ID provided by the switch or
simply use 1. With that we are able to pass and receive traffic without any
issues.

In addition we were not repopulating the filter table following a reset. To
correct that I have added a bit of code to fm10k_restore_rx_state that will
repopulate the Rx filter configuration for the macvlan interfaces.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/fm10k/fm10k_netdev.c |   25 ++++++++++++++++++++---
 1 file changed, 22 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
index adc62fb38c49..6d9088956407 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
@@ -1182,9 +1182,10 @@ static void fm10k_set_rx_mode(struct net_device *dev)
 
 void fm10k_restore_rx_state(struct fm10k_intfc *interface)
 {
+	struct fm10k_l2_accel *l2_accel = interface->l2_accel;
 	struct net_device *netdev = interface->netdev;
 	struct fm10k_hw *hw = &interface->hw;
-	int xcast_mode;
+	int xcast_mode, i;
 	u16 vid, glort;
 
 	/* record glort for this interface */
@@ -1234,6 +1235,24 @@ void fm10k_restore_rx_state(struct fm10k_intfc *interface)
 	__dev_uc_sync(netdev, fm10k_uc_sync, fm10k_uc_unsync);
 	__dev_mc_sync(netdev, fm10k_mc_sync, fm10k_mc_unsync);
 
+	/* synchronize macvlan addresses */
+	if (l2_accel) {
+		for (i = 0; i < l2_accel->size; i++) {
+			struct net_device *sdev = l2_accel->macvlan[i];
+
+			if (!sdev)
+				continue;
+
+			glort = l2_accel->dglort + 1 + i;
+
+			hw->mac.ops.update_xcast_mode(hw, glort,
+						      FM10K_XCAST_MODE_MULTI);
+			fm10k_queue_mac_request(interface, glort,
+						sdev->dev_addr,
+						hw->mac.default_vid, true);
+		}
+	}
+
 	fm10k_mbx_unlock(interface);
 
 	/* record updated xcast mode state */
@@ -1490,7 +1509,7 @@ static void *fm10k_dfwd_add_station(struct net_device *dev,
 		hw->mac.ops.update_xcast_mode(hw, glort,
 					      FM10K_XCAST_MODE_MULTI);
 		fm10k_queue_mac_request(interface, glort, sdev->dev_addr,
-					0, true);
+					hw->mac.default_vid, true);
 	}
 
 	fm10k_mbx_unlock(interface);
@@ -1530,7 +1549,7 @@ static void fm10k_dfwd_del_station(struct net_device *dev, void *priv)
 		hw->mac.ops.update_xcast_mode(hw, glort,
 					      FM10K_XCAST_MODE_NONE);
 		fm10k_queue_mac_request(interface, glort, sdev->dev_addr,
-					0, false);
+					hw->mac.default_vid, false);
 	}
 
 	fm10k_mbx_unlock(interface);



* [Intel-wired-lan] [jkirsher/next-queue PATCH 01/16] ixgbe: Fix interaction between SR-IOV and macvlan offload
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 01/16] ixgbe: Fix interaction between SR-IOV and macvlan offload Alexander Duyck
@ 2017-11-29 16:40   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 16:40 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:56 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 01/16] ixgbe: Fix
> interaction between SR-IOV and macvlan offload
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> When SR-IOV was enabled the macvlan offload was configuring several filters
> with the wrong pool value. This would result in the macvlan interfaces not
> being able to receive traffic that had to pass over the physical interface.
> 
> To fix it, wrap the pool argument in the VMDQ_P macro, which will add the
> necessary offset to get to the actual VMDq pool.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>




* [Intel-wired-lan] [jkirsher/next-queue PATCH 02/16] ixgbe: Perform reinit any time number of VFs change
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 02/16] ixgbe: Perform reinit any time number of VFs change Alexander Duyck
@ 2017-11-29 17:00   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:00 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:56 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 02/16] ixgbe: Perform
> reinit any time number of VFs change
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> If the number of VFs is changed we need to reinitialize the part, since the
> offset for the device and the number of pools will be incorrect. Without this
> change we can end up seeing Tx hangs and dropped Rx frames for incoming
> traffic.
> 
> In addition we should drop the code that is arbitrarily changing the default
> pool and queue configuration. Instead we should wait until the port is reset
> and reconfigured via ixgbe_sriov_reinit.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c |   19 +++----------------
>  1 file changed, 3 insertions(+), 16 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>




* [Intel-wired-lan] [jkirsher/next-queue PATCH 03/16] ixgbe: Add support for macvlan offload RSS on X550 and clean-up pool handling
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 03/16] ixgbe: Add support for macvlan offload RSS on X550 and clean-up pool handling Alexander Duyck
@ 2017-11-29 17:01   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:01 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:56 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 03/16] ixgbe: Add
> support for macvlan offload RSS on X550 and clean-up pool handling
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> In order for RSS to work on the macvlan pools of the X550 we need to
> populate the MRQC, RETA, and RSS key values for each pool. This patch
> makes it so that we now take care of that.
> 
> In addition I have dropped the macvlan specific configuration of psrtype since
> it is redundant with the code that already exists for configuring this value.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   62 ++++++++++------------
> ---
>  1 file changed, 25 insertions(+), 37 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>




* [Intel-wired-lan] [jkirsher/next-queue PATCH 04/16] ixgbe: There is no need to update num_rx_pools in L2 fwd offload
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 04/16] ixgbe: There is no need to update num_rx_pools in L2 fwd offload Alexander Duyck
@ 2017-11-29 17:02   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:02 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:57 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 04/16] ixgbe: There is
> no need to update num_rx_pools in L2 fwd offload
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> The num_rx_pools value is overwritten when we reinitialize the queue
> configuration. In reality we shouldn't need to be updating the value since it is
> redone every time we call into ixgbe_setup_tc so for now just drop the spots
> where we were incrementing or decrementing the value.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |    2 +-
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |    3 ---
>  2 files changed, 1 insertion(+), 4 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>




* [Intel-wired-lan] [jkirsher/next-queue PATCH 05/16] ixgbe: Fix limitations on macvlan so we can support up to 63 offloaded devices
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 05/16] ixgbe: Fix limitations on macvlan so we can support up to 63 offloaded devices Alexander Duyck
@ 2017-11-29 17:03   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:03 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:57 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 05/16] ixgbe: Fix
> limitations on macvlan so we can support up to 63 offloaded devices
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> This change is a fix of the macvlan offload so that we correctly handle
> macvlan offloaded devices. Specifically we were configuring our limits based
> on the assumption that we were going to max out the RSS indices for every
> mode. As a result when we went to 15 or more macvlan interfaces we were
> forced into the 2 queue RSS mode on VFs even though they could have still
> supported 4.
> 
> This change splits the logic up so that we limit either the total number of
> macvlan instances if DCB is enabled, or limit the number of RSS queues used
> per macvlan (instead of per pool) if SR-IOV is enabled. By doing this we can
> make best use of the part.
> 
> In addition I have increased the maximum number of supported interfaces to
> 63 with one queue per offloaded interface as this more closely reflects the
> actual values supported by the interface.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe.h       |    6 ++--
>  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c   |    9 +++++-
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c  |   35 ++++++++++--------------
>  drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c |   27 ++++++-------------
>  4 files changed, 34 insertions(+), 43 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 06/16] ixgbe: Use ring values to test for Tx pending
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 06/16] ixgbe: Use ring values to test for Tx pending Alexander Duyck
@ 2017-11-29 17:04   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:04 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:57 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 06/16] ixgbe: Use ring
> values to test for Tx pending
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> This patch simplifies the check for pending Tx traffic and makes it more
> holistic. Any difference between next_to_use and next_to_clean is much more
> informative than whether head and tail are equal, since it is possible for us
> either to not update tail or to not be notified of completed work, in which
> case next_to_clean would not be equal to head.
> 
> In addition, the simplification means we no longer have to read hardware,
> which allows us to drop a number of variables that were previously used in
> the call.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   20 ++++----------------
>  1 file changed, 4 insertions(+), 16 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 07/16] ixgbe: Drop l2_accel_priv data pointer from ring struct
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 07/16] ixgbe: Drop l2_accel_priv data pointer from ring struct Alexander Duyck
@ 2017-11-29 17:04   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:04 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:57 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 07/16] ixgbe: Drop
> l2_accel_priv data pointer from ring struct
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> The l2 acceleration private pointer isn't needed in the ring struct. It isn't
> really used anywhere other than to test whether we are supporting an
> offloaded macvlan netdev, and it is much easier to verify that by testing
> whether the netdev is not ixgbe based.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe.h      |    1 -
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   23 +++++++++++++----------
>  2 files changed, 13 insertions(+), 11 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 08/16] ixgbe: Assume provided MAC filter has been verified by macvlan
  2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 08/16] ixgbe: Assume provided MAC filter has been verified by macvlan Alexander Duyck
@ 2017-11-29 17:06   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:06 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:57 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 08/16] ixgbe: Assume
> provided MAC filter has been verified by macvlan
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> The macvlan driver itself will validate the MAC address that is configured for a
> given interface. There is no need for us to verify it again.
> 
> Instead we should be checking to verify that we actually allocated the filter
> and have not run out of resources to configure a MAC rule in our filter table.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 09/16] ixgbe: Default to 1 pool always being allocated
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 09/16] ixgbe: Default to 1 pool always being allocated Alexander Duyck
@ 2017-11-29 17:07   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:07 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:57 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 09/16] ixgbe: Default
> to 1 pool always being allocated
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> We might as well configure the limit to default to 1 pool always for the
> interface. This accounts for the fact that the PF counts as 1 pool if SR-IOV is
> enabled, and in general we are always running in 1 pool mode when RSS or
> DCB is enabled as well, though we don't need to actually evaluate any of the
> VMDq features in those cases.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c  |    1 +
>  drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c |    7 ++-----
>  2 files changed, 3 insertions(+), 5 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 10/16] ixgbe: Don't assume dev->num_tc is equal to hardware TC config
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 10/16] ixgbe: Don't assume dev->num_tc is equal to hardware TC config Alexander Duyck
@ 2017-11-29 17:08   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:08 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:57 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 10/16] ixgbe: Don't
> assume dev->num_tc is equal to hardware TC config
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> The code throughout ixgbe was assuming that dev->num_tc was populated
> and configured by the driver, when in fact this can be configured via
> mqprio without any hardware coordination other than restricting us to the
> real number of Tx queues we advertise.
> 
> Instead of handling things this way, we need to keep a local copy of the
> number of TCs in use so that we don't accidentally pull in the TC
> configuration from mqprio when it is configured in software mode.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe.h         |    1 +
>  drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c  |    2 +-
>  drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c |    6 +++---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c     |   17 ++++++++---------
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c    |   22 ++++++++++++----------
>  drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c   |    8 ++++----
>  6 files changed, 29 insertions(+), 27 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>


^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 11/16] ixgbe/fm10k: Record macvlan stats instead of Rx queue for macvlan offloaded rings
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 11/16] ixgbe/fm10k: Record macvlan stats instead of Rx queue for macvlan offloaded rings Alexander Duyck
@ 2017-11-29 17:08   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:08 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:57 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 11/16] ixgbe/fm10k:
> Record macvlan stats instead of Rx queue for macvlan offloaded rings
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> We shouldn't be recording the Rx queue on macvlan offloaded frames since
> the macvlan is normally brought up as a single-queue device, and it will
> trigger warnings for RPS if we have recorded queue IDs larger than the
> "real_num_rx_queues" value recorded for the device.
> 
> Instead we should be recording the macvlan statistics since we are bypassing
> the normal macvlan statistics that would have been generated by the receive
> path.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/fm10k/fm10k_main.c |   14 ++++++--------
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   10 ++++++++--
>  2 files changed, 14 insertions(+), 10 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 12/16] ixgbe: Do not manipulate macvlan Tx queues when performing macvlan offload
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 12/16] ixgbe: Do not manipulate macvlan Tx queues when performing macvlan offload Alexander Duyck
@ 2017-11-29 17:09   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:09 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:57 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 12/16] ixgbe: Do not
> manipulate macvlan Tx queues when performing macvlan offload
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> We should not be stopping/starting the upper device's Tx queues when
> handling a macvlan offload. Instead we should be stopping and starting
> traffic on our own queues.
> 
> In order to prevent this, I am updating the code so that we no longer
> change the queue configuration on the upper device, nor do we update the
> queue_index on our own device. Instead we can just use the queue index for
> our local device and not update the netdev in the case of the transmit
> rings.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |   12 --
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |  121 +++++--------------------
>  2 files changed, 25 insertions(+), 108 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 13/16] ixgbe: avoid bringing rings up/down as macvlans are added/removed
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 13/16] ixgbe: avoid bringing rings up/down as macvlans are added/removed Alexander Duyck
@ 2017-11-29 17:09   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:09 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:57 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 13/16] ixgbe: avoid
> bringing rings up/down as macvlans are added/removed
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> This change makes it so that instead of bringing rings up/down as macvlans
> are added or removed, we just update the netdev pointer for the Rx ring and
> set or clear the MAC filter for the interface. By doing it this way we can
> avoid a number of races and issues in the code, as things were getting messy
> with the macvlan clean-up racing with the interface clean-up to bring the
> rings down on shutdown.
> 
> With this change we opt to leave the rings owned by the PF interface for
> both Tx and Rx and just direct the packets once they are received to the
> macvlan netdev.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |   28 +++++--
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |  102 +++++++++++++------------
>  2 files changed, 72 insertions(+), 58 deletions(-)


Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 14/16] ixgbe: Fix handling of macvlan Tx offload
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 14/16] ixgbe: Fix handling of macvlan Tx offload Alexander Duyck
@ 2017-11-29 17:10   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:10 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:58 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 14/16] ixgbe: Fix
> handling of macvlan Tx offload
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> This update makes it so that we report the actual number of Tx queues via
> real_num_tx_queues but are still restricted to RSS on only the first pool by
> setting num_tc equal to 1. Doing this locks us into only having the ability
> to set up XPS on the queues in that pool, and only those queues should be
> used for transmitting anything other than macvlan traffic.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |    4 ++++
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   20 ++++++++++----------
>  2 files changed, 14 insertions(+), 10 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 15/16] net: Cap number of queues even with accel_priv
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 15/16] net: Cap number of queues even with accel_priv Alexander Duyck
@ 2017-11-29 17:11   ` Bowers, AndrewX
  0 siblings, 0 replies; 33+ messages in thread
From: Bowers, AndrewX @ 2017-11-29 17:11 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:58 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 15/16] net: Cap
> number of queues even with accel_priv
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> With the recent fix to ixgbe we can always cap the number of queues,
> regardless of whether accel_priv is being used, since the actual number of
> queues is being reported via real_num_tx_queues.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  net/core/dev.c |    3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

* [Intel-wired-lan] [jkirsher/next-queue PATCH 16/16] fm10k: Fix configuration for macvlan offload
  2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 16/16] fm10k: Fix configuration for macvlan offload Alexander Duyck
@ 2018-01-09 20:40   ` Singh, Krishneil K
  0 siblings, 0 replies; 33+ messages in thread
From: Singh, Krishneil K @ 2018-01-09 20:40 UTC (permalink / raw)
  To: intel-wired-lan


> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On Behalf
> Of Alexander Duyck
> Sent: Wednesday, November 22, 2017 10:58 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [jkirsher/next-queue PATCH 16/16] fm10k: Fix
> configuration for macvlan offload
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> The fm10k driver didn't work correctly when macvlan offload was enabled.
> Specifically, what would occur is that we would see no unicast packets being
> received. This was traced down to us not correctly configuring the default
> VLAN ID for the port, which was left defaulting to 0.
> 
> To correct this we either use the default ID provided by the switch or
> simply use 1. With that we are able to pass and receive traffic without any
> issues.
> 
> In addition we were not repopulating the filter table following a reset. To
> correct that I have added a bit of code to fm10k_restore_rx_state that will
> repopulate the Rx filter configuration for the macvlan interfaces.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/fm10k/fm10k_netdev.c |   25 ++++++++++++++++++++---
>  1 file changed, 22 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
> b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
> index adc62fb38c49..6d9088956407 100644
> --- a/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
> +++ b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
> @@ -1182,9 +1182,10 @@ static void fm10k_set_rx_mode(struct net_device
> *dev)
> 
>  void fm10k_restore_rx_state(struct fm10k_intfc *interface)
>  {
> +	struct fm10k_l2_accel *l2_accel = interface->l2_accel;
>  	struct net_device *netdev = interface->netdev;
>  	struct fm10k_hw *hw = &interface->hw;
> -	int xcast_mode;
> +	int xcast_mode, i;
>  	u16 vid, glort;
> 
>  	/* record glort for this interface */
> @@ -1234,6 +1235,24 @@ void fm10k_restore_rx_state(struct fm10k_intfc
> *interface)
>  	__dev_uc_sync(netdev, fm10k_uc_sync, fm10k_uc_unsync);
>  	__dev_mc_sync(netdev, fm10k_mc_sync, fm10k_mc_unsync);
> 
> +	/* synchronize macvlan addresses */
> +	if (l2_accel) {
> +		for (i = 0; i < l2_accel->size; i++) {
> +			struct net_device *sdev = l2_accel->macvlan[i];
> +
> +			if (!sdev)
> +				continue;
> +
> +			glort = l2_accel->dglort + 1 + i;
> +
> +			hw->mac.ops.update_xcast_mode(hw, glort,
> +						      FM10K_XCAST_MODE_MULTI);
> +			fm10k_queue_mac_request(interface, glort,
> +						sdev->dev_addr,
> +						hw->mac.default_vid, true);
> +		}
> +	}
> +
>  	fm10k_mbx_unlock(interface);
> 
>  	/* record updated xcast mode state */
> @@ -1490,7 +1509,7 @@ static void *fm10k_dfwd_add_station(struct
> net_device *dev,
>  		hw->mac.ops.update_xcast_mode(hw, glort,
>  					      FM10K_XCAST_MODE_MULTI);
>  		fm10k_queue_mac_request(interface, glort, sdev->dev_addr,
> -					0, true);
> +					hw->mac.default_vid, true);
>  	}
> 
>  	fm10k_mbx_unlock(interface);
> @@ -1530,7 +1549,7 @@ static void fm10k_dfwd_del_station(struct
> net_device *dev, void *priv)
>  		hw->mac.ops.update_xcast_mode(hw, glort,
>  					      FM10K_XCAST_MODE_NONE);
>  		fm10k_queue_mac_request(interface, glort, sdev->dev_addr,
> -					0, false);
> +					hw->mac.default_vid, false);
>  	}
> 
>  	fm10k_mbx_unlock(interface);
> 
> _______________________________________________
> Intel-wired-lan mailing list
> Intel-wired-lan at osuosl.org
> https://lists.osuosl.org/mailman/listinfo/intel-wired-lan

Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>



^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2018-01-09 20:40 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-11-22 18:56 [Intel-wired-lan] [jkirsher/next-queue PATCH 00/16] ixgbe/fm10k: macvlan fixes Alexander Duyck
2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 01/16] ixgbe: Fix interaction between SR-IOV and macvlan offload Alexander Duyck
2017-11-29 16:40   ` Bowers, AndrewX
2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 02/16] ixgbe: Perform reinit any time number of VFs change Alexander Duyck
2017-11-29 17:00   ` Bowers, AndrewX
2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 03/16] ixgbe: Add support for macvlan offload RSS on X550 and clean-up pool handling Alexander Duyck
2017-11-29 17:01   ` Bowers, AndrewX
2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 04/16] ixgbe: There is no need to update num_rx_pools in L2 fwd offload Alexander Duyck
2017-11-29 17:02   ` Bowers, AndrewX
2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 05/16] ixgbe: Fix limitations on macvlan so we can support up to 63 offloaded devices Alexander Duyck
2017-11-29 17:03   ` Bowers, AndrewX
2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 06/16] ixgbe: Use ring values to test for Tx pending Alexander Duyck
2017-11-29 17:04   ` Bowers, AndrewX
2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 07/16] ixgbe: Drop l2_accel_priv data pointer from ring struct Alexander Duyck
2017-11-29 17:04   ` Bowers, AndrewX
2017-11-22 18:56 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 08/16] ixgbe: Assume provided MAC filter has been verified by macvlan Alexander Duyck
2017-11-29 17:06   ` Bowers, AndrewX
2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 09/16] ixgbe: Default to 1 pool always being allocated Alexander Duyck
2017-11-29 17:07   ` Bowers, AndrewX
2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 10/16] ixgbe: Don't assume dev->num_tc is equal to hardware TC config Alexander Duyck
2017-11-29 17:08   ` Bowers, AndrewX
2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 11/16] ixgbe/fm10k: Record macvlan stats instead of Rx queue for macvlan offloaded rings Alexander Duyck
2017-11-29 17:08   ` Bowers, AndrewX
2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 12/16] ixgbe: Do not manipulate macvlan Tx queues when performing macvlan offload Alexander Duyck
2017-11-29 17:09   ` Bowers, AndrewX
2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 13/16] ixgbe: avoid bringing rings up/down as macvlans are added/removed Alexander Duyck
2017-11-29 17:09   ` Bowers, AndrewX
2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 14/16] ixgbe: Fix handling of macvlan Tx offload Alexander Duyck
2017-11-29 17:10   ` Bowers, AndrewX
2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 15/16] net: Cap number of queues even with accel_priv Alexander Duyck
2017-11-29 17:11   ` Bowers, AndrewX
2017-11-22 18:57 ` [Intel-wired-lan] [jkirsher/next-queue PATCH 16/16] fm10k: Fix configuration for macvlan offload Alexander Duyck
2018-01-09 20:40   ` Singh, Krishneil K
