* [Intel-wired-lan] [next PATCH 0/3] Add support for 4 queues VFs with 1 or 2 queue RSS on PF
@ 2016-09-08  3:28 Alexander Duyck
  2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 1/3] ixgbe: Allow setting multiple queues when SR-IOV is enabled Alexander Duyck
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Alexander Duyck @ 2016-09-08  3:28 UTC (permalink / raw)
  To: intel-wired-lan

This patch set addresses a few minor issues and allows us to enable 4 queue
RSS on the VFs even if the PF is configured for fewer than 4 queues.  I found
and fixed at least 2 bugs with the first 2 patches in this set, and the third
patch enables us to have 4 queues with RSS assuming that we have not
exceeded 32 total VM pools and DCB is not enabled.

---

Alexander Duyck (3):
      ixgbe: Allow setting multiple queues when SR-IOV is enabled
      ixgbe: Limit reporting of redirection table if SR-IOV is enabled
      ixgbe: Support 4 queue RSS on VFs with 1 or 2 queue RSS on PF


 drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c |   10 +++++++---
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c     |    7 ++++---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c    |   12 +++++++-----
 3 files changed, 18 insertions(+), 11 deletions(-)

--

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [Intel-wired-lan] [next PATCH 1/3] ixgbe: Allow setting multiple queues when SR-IOV is enabled
  2016-09-08  3:28 [Intel-wired-lan] [next PATCH 0/3] Add support for 4 queues VFs with 1 or 2 queue RSS on PF Alexander Duyck
@ 2016-09-08  3:28 ` Alexander Duyck
  2016-09-09 18:43   ` Bowers, AndrewX
  2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 2/3] ixgbe: Limit reporting of redirection table if " Alexander Duyck
  2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 3/3] ixgbe: Support 4 queue RSS on VFs with 1 or 2 queue RSS on PF Alexander Duyck
  2 siblings, 1 reply; 12+ messages in thread
From: Alexander Duyck @ 2016-09-08  3:28 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

The maximum queue count reported was 1; however, support for multiple queues
with SR-IOV was added some time ago, so we should report it to the user so
that they can select multiple queues if they so desire.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index 730a99f0f002..2d872be336bb 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -3060,8 +3060,8 @@ static unsigned int ixgbe_max_channels(struct ixgbe_adapter *adapter)
 		/* We only support one q_vector without MSI-X */
 		max_combined = 1;
 	} else if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) {
-		/* SR-IOV currently only allows one queue on the PF */
-		max_combined = 1;
+		/* Limit value based on the queue mask */
+		max_combined = adapter->ring_feature[RING_F_RSS].mask + 1;
 	} else if (tcs > 1) {
 		/* For DCB report channels per traffic class */
 		if (adapter->hw.mac.type == ixgbe_mac_82598EB) {
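
For illustration only (the helper below is not from the patch and its name
is made up): the combined channel count reported to ethtool after this
change follows directly from the RSS queue mask the driver already keeps in
ring_feature[RING_F_RSS], i.e. IXGBE_RSS_4Q_MASK (0x3) or IXGBE_RSS_2Q_MASK
(0x1), assuming the usual driver values.

/* Hedged sketch of the reported value; assumes kernel types (u16). */
static unsigned int example_sriov_max_combined(u16 rss_mask)
{
	return rss_mask + 1;	/* 0x3 + 1 = 4 channels, 0x1 + 1 = 2 channels */
}

So with SR-IOV enabled a user can now select up to 2 or 4 combined queues
with "ethtool -L" instead of being limited to a single queue.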


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [Intel-wired-lan] [next PATCH 2/3] ixgbe: Limit reporting of redirection table if SR-IOV is enabled
  2016-09-08  3:28 [Intel-wired-lan] [next PATCH 0/3] Add support for 4 queues VFs with 1 or 2 queue RSS on PF Alexander Duyck
  2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 1/3] ixgbe: Allow setting multiple queues when SR-IOV is enabled Alexander Duyck
@ 2016-09-08  3:28 ` Alexander Duyck
  2016-09-09 18:44   ` Bowers, AndrewX
  2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 3/3] ixgbe: Support 4 queue RSS on VFs with 1 or 2 queue RSS on PF Alexander Duyck
  2 siblings, 1 reply; 12+ messages in thread
From: Alexander Duyck @ 2016-09-08  3:28 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

The hardware redirection table can support more queues than the PF
currently has when SR-IOV is enabled.  In order to account for this, use the
RSS mask to trim off the bits that are not used.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index 2d872be336bb..f49f80380aa5 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -2947,9 +2947,13 @@ static u32 ixgbe_rss_indir_size(struct net_device *netdev)
 static void ixgbe_get_reta(struct ixgbe_adapter *adapter, u32 *indir)
 {
 	int i, reta_size = ixgbe_rss_indir_tbl_entries(adapter);
+	u16 rss_m = adapter->ring_feature[RING_F_RSS].mask;
+
+	if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)
+		rss_m = adapter->ring_feature[RING_F_RSS].indices - 1;
 
 	for (i = 0; i < reta_size; i++)
-		indir[i] = adapter->rss_indir_tbl[i];
+		indir[i] = adapter->rss_indir_tbl[i] & rss_m;
 }
 
 static int ixgbe_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
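
A hedged sketch of the reporting behaviour this hunk introduces (the helper
name is hypothetical): under SR-IOV the entries handed back through ethtool
are masked by the PF's RSS queue count, which relies on that count being a
power of two, something patch 3 of this series guarantees.

/* Illustrative only: with SR-IOV and e.g. 2 RSS queues in the PF,
 * rss_indices - 1 = 1, so every reported entry maps to queue 0 or 1 even
 * if the hardware table references more queues.
 */
static u32 example_reported_reta_entry(u32 hw_entry, u16 rss_indices)
{
	return hw_entry & (rss_indices - 1);
}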


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [Intel-wired-lan] [next PATCH 3/3] ixgbe: Support 4 queue RSS on VFs with 1 or 2 queue RSS on PF
  2016-09-08  3:28 [Intel-wired-lan] [next PATCH 0/3] Add support for 4 queues VFs with 1 or 2 queue RSS on PF Alexander Duyck
  2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 1/3] ixgbe: Allow setting multiple queues when SR-IOV is enabled Alexander Duyck
  2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 2/3] ixgbe: Limit reporting of redirection table if " Alexander Duyck
@ 2016-09-08  3:28 ` Alexander Duyck
  2016-09-08 18:20   ` Ruslan Nikolaev
  2016-09-09 18:46   ` Bowers, AndrewX
  2 siblings, 2 replies; 12+ messages in thread
From: Alexander Duyck @ 2016-09-08  3:28 UTC (permalink / raw)
  To: intel-wired-lan

From: Alexander Duyck <alexander.h.duyck@intel.com>

Instead of limiting the VFs when the PF doesn't use 4 queues for RSS, we
can just limit the RSS queues used to a power of 2.  By doing this we can
support use cases where VFs are using more queues than the PF is currently
using and can support RSS if so desired.

The only limitation on this is that we cannot support 3 queues of RSS in
the PF or VF.  In either of these cases we should fall back to 2 queues in
order to be able to use the power of 2 masking provided by the PSRTYPE
register.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |    7 ++++---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   12 +++++++-----
 2 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index bcdc88444ceb..15ab337fd7ad 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -515,15 +515,16 @@ static bool ixgbe_set_sriov_queues(struct ixgbe_adapter *adapter)
 	vmdq_i = min_t(u16, IXGBE_MAX_VMDQ_INDICES, vmdq_i);
 
 	/* 64 pool mode with 2 queues per pool */
-	if ((vmdq_i > 32) || (rss_i < 4) || (vmdq_i > 16 && pools)) {
+	if ((vmdq_i > 32) || (vmdq_i > 16 && pools)) {
 		vmdq_m = IXGBE_82599_VMDQ_2Q_MASK;
 		rss_m = IXGBE_RSS_2Q_MASK;
 		rss_i = min_t(u16, rss_i, 2);
-	/* 32 pool mode with 4 queues per pool */
+	/* 32 pool mode with up to 4 queues per pool */
 	} else {
 		vmdq_m = IXGBE_82599_VMDQ_4Q_MASK;
 		rss_m = IXGBE_RSS_4Q_MASK;
-		rss_i = 4;
+		/* We can support 4, 2, or 1 queues */
+		rss_i = (rss_i > 3) ? 4 : (rss_i > 1) ? 2 : 1;
 	}
 
 #ifdef IXGBE_FCOE
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 1c888588cecd..a244d9a67264 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -3248,7 +3248,8 @@ static void ixgbe_setup_mtqc(struct ixgbe_adapter *adapter)
 			mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;
 		else if (tcs > 1)
 			mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ;
-		else if (adapter->ring_feature[RING_F_RSS].indices == 4)
+		else if (adapter->ring_feature[RING_F_VMDQ].mask ==
+			 IXGBE_82599_VMDQ_4Q_MASK)
 			mtqc |= IXGBE_MTQC_32VF;
 		else
 			mtqc |= IXGBE_MTQC_64VF;
@@ -3475,12 +3476,12 @@ static void ixgbe_setup_reta(struct ixgbe_adapter *adapter)
 	u32 reta_entries = ixgbe_rss_indir_tbl_entries(adapter);
 	u16 rss_i = adapter->ring_feature[RING_F_RSS].indices;
 
-	/* Program table for at least 2 queues w/ SR-IOV so that VFs can
+	/* Program table for at least 4 queues w/ SR-IOV so that VFs can
 	 * make full use of any rings they may have.  We will use the
 	 * PSRTYPE register to control how many rings we use within the PF.
 	 */
-	if ((adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 2))
-		rss_i = 2;
+	if ((adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 4))
+		rss_i = 4;
 
 	/* Fill out hash function seeds */
 	for (i = 0; i < 10; i++)
@@ -3544,7 +3545,8 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter)
 				mrqc = IXGBE_MRQC_VMDQRT8TCEN;	/* 8 TCs */
 			else if (tcs > 1)
 				mrqc = IXGBE_MRQC_VMDQRT4TCEN;	/* 4 TCs */
-			else if (adapter->ring_feature[RING_F_RSS].indices == 4)
+			else if (adapter->ring_feature[RING_F_VMDQ].mask ==
+				 IXGBE_82599_VMDQ_4Q_MASK)
 				mrqc = IXGBE_MRQC_VMDQRSS32EN;
 			else
 				mrqc = IXGBE_MRQC_VMDQRSS64EN;
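
A minimal sketch of the queue selection the ixgbe_lib.c hunk above
implements (the standalone function and its name are made up for
illustration; the driver does this inside ixgbe_set_sriov_queues, and the
sketch assumes kernel helpers such as min_t()):

/* 64 pool mode keeps at most 2 queues per pool; 32 pool mode allows up to
 * 4, with the PF's own RSS count clamped to a power of 2 (4, 2, or 1).
 */
static u16 example_sriov_rss_queues(u16 vmdq_i, u16 rss_i, bool extra_pools)
{
	if (vmdq_i > 32 || (vmdq_i > 16 && extra_pools))
		return min_t(u16, rss_i, 2);

	return (rss_i > 3) ? 4 : (rss_i > 1) ? 2 : 1;
}

So a PF configured for 3 RSS queues falls back to 2, while a PF using 1 or
2 queues keeps its setting and the 4-queue pool mask still leaves room for
VFs to run 4 queue RSS, matching the commit message.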


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [Intel-wired-lan] [next PATCH 3/3] ixgbe: Support 4 queue RSS on VFs with 1 or 2 queue RSS on PF
  2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 3/3] ixgbe: Support 4 queue RSS on VFs with 1 or 2 queue RSS on PF Alexander Duyck
@ 2016-09-08 18:20   ` Ruslan Nikolaev
  2016-09-09  1:14     ` Ruslan Nikolaev
  2016-09-09 18:46   ` Bowers, AndrewX
  1 sibling, 1 reply; 12+ messages in thread
From: Ruslan Nikolaev @ 2016-09-08 18:20 UTC (permalink / raw)
  To: intel-wired-lan

Thank you very much for the feedback and for creating a new set of patches!

Ruslan

On Sep 7, 2016, at 8:28 PM, Alexander Duyck <alexander.duyck@gmail.com> wrote:

> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> Instead of limiting the VFs if we don't use 4 queues for RSS in the PF we
> can instead just limit the RSS queues used to a power of 2.  By doing this
> we can support use cases where VFs are using more queues than the PF is
> currently using and can support RSS if so desired.
> 
> The only limitation on this is that we cannot support 3 queues of RSS in
> the PF or VF.  In either of these cases we should fall back to 2 queues in
> order to be able to use the power of 2 masking provided by the psrtype
> register.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
> drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |    7 ++++---
> drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   12 +++++++-----
> 2 files changed, 11 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> index bcdc88444ceb..15ab337fd7ad 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> @@ -515,15 +515,16 @@ static bool ixgbe_set_sriov_queues(struct ixgbe_adapter *adapter)
>  	vmdq_i = min_t(u16, IXGBE_MAX_VMDQ_INDICES, vmdq_i);
> 
>  	/* 64 pool mode with 2 queues per pool */
> -	if ((vmdq_i > 32) || (rss_i < 4) || (vmdq_i > 16 && pools)) {
> +	if ((vmdq_i > 32) || (vmdq_i > 16 && pools)) {
>  		vmdq_m = IXGBE_82599_VMDQ_2Q_MASK;
>  		rss_m = IXGBE_RSS_2Q_MASK;
>  		rss_i = min_t(u16, rss_i, 2);
> -	/* 32 pool mode with 4 queues per pool */
> +	/* 32 pool mode with up to 4 queues per pool */
>  	} else {
>  		vmdq_m = IXGBE_82599_VMDQ_4Q_MASK;
>  		rss_m = IXGBE_RSS_4Q_MASK;
> -		rss_i = 4;
> +		/* We can support 4, 2, or 1 queues */
> +		rss_i = (rss_i > 3) ? 4 : (rss_i > 1) ? 2 : 1;
>  	}
> 
> #ifdef IXGBE_FCOE
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> index 1c888588cecd..a244d9a67264 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> @@ -3248,7 +3248,8 @@ static void ixgbe_setup_mtqc(struct ixgbe_adapter *adapter)
>  			mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;
>  		else if (tcs > 1)
>  			mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ;
> -		else if (adapter->ring_feature[RING_F_RSS].indices == 4)
> +		else if (adapter->ring_feature[RING_F_VMDQ].mask ==
> +			 IXGBE_82599_VMDQ_4Q_MASK)
>  			mtqc |= IXGBE_MTQC_32VF;
>  		else
>  			mtqc |= IXGBE_MTQC_64VF;
> @@ -3475,12 +3476,12 @@ static void ixgbe_setup_reta(struct ixgbe_adapter *adapter)
>  	u32 reta_entries = ixgbe_rss_indir_tbl_entries(adapter);
>  	u16 rss_i = adapter->ring_feature[RING_F_RSS].indices;
> 
> -	/* Program table for at least 2 queues w/ SR-IOV so that VFs can
> +	/* Program table for at least 4 queues w/ SR-IOV so that VFs can
>  	 * make full use of any rings they may have.  We will use the
>  	 * PSRTYPE register to control how many rings we use within the PF.
>  	 */
> -	if ((adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 2))
> -		rss_i = 2;
> +	if ((adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 4))
> +		rss_i = 4;
> 
>  	/* Fill out hash function seeds */
>  	for (i = 0; i < 10; i++)
> @@ -3544,7 +3545,8 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter)
>  				mrqc = IXGBE_MRQC_VMDQRT8TCEN;	/* 8 TCs */
>  			else if (tcs > 1)
>  				mrqc = IXGBE_MRQC_VMDQRT4TCEN;	/* 4 TCs */
> -			else if (adapter->ring_feature[RING_F_RSS].indices == 4)
> +			else if (adapter->ring_feature[RING_F_VMDQ].mask ==
> +				 IXGBE_82599_VMDQ_4Q_MASK)
>  				mrqc = IXGBE_MRQC_VMDQRSS32EN;
>  			else
>  				mrqc = IXGBE_MRQC_VMDQRSS64EN;
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [Intel-wired-lan] [next PATCH 3/3] ixgbe: Support 4 queue RSS on VFs with 1 or 2 queue RSS on PF
  2016-09-08 18:20   ` Ruslan Nikolaev
@ 2016-09-09  1:14     ` Ruslan Nikolaev
  2016-09-09 15:32       ` Alexander Duyck
  0 siblings, 1 reply; 12+ messages in thread
From: Ruslan Nikolaev @ 2016-09-09  1:14 UTC (permalink / raw)
  To: intel-wired-lan

I still have one more question. There is also the ixgbe_setup_vfreta function in the code. Do we need to adjust rss_i there as well?

Thanks,
Ruslan

On Sep 8, 2016, at 11:20 AM, Ruslan Nikolaev <ruslan@purestorage.com> wrote:

> Thank you very much for giving the feedback and creating new set of patches!
> 
> Ruslan
> 
> On Sep 7, 2016, at 8:28 PM, Alexander Duyck <alexander.duyck@gmail.com> wrote:
> 
>> From: Alexander Duyck <alexander.h.duyck@intel.com>
>> 
>> Instead of limiting the VFs if we don't use 4 queues for RSS in the PF we
>> can instead just limit the RSS queues used to a power of 2.  By doing this
>> we can support use cases where VFs are using more queues than the PF is
>> currently using and can support RSS if so desired.
>> 
>> The only limitation on this is that we cannot support 3 queues of RSS in
>> the PF or VF.  In either of these cases we should fall back to 2 queues in
>> order to be able to use the power of 2 masking provided by the psrtype
>> register.
>> 
>> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
>> ---
>> drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |    7 ++++---
>> drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   12 +++++++-----
>> 2 files changed, 11 insertions(+), 8 deletions(-)
>> 
>> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>> index bcdc88444ceb..15ab337fd7ad 100644
>> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>> @@ -515,15 +515,16 @@ static bool ixgbe_set_sriov_queues(struct ixgbe_adapter *adapter)
>> 	vmdq_i = min_t(u16, IXGBE_MAX_VMDQ_INDICES, vmdq_i);
>> 
>> 	/* 64 pool mode with 2 queues per pool */
>> -	if ((vmdq_i > 32) || (rss_i < 4) || (vmdq_i > 16 && pools)) {
>> +	if ((vmdq_i > 32) || (vmdq_i > 16 && pools)) {
>> 		vmdq_m = IXGBE_82599_VMDQ_2Q_MASK;
>> 		rss_m = IXGBE_RSS_2Q_MASK;
>> 		rss_i = min_t(u16, rss_i, 2);
>> -	/* 32 pool mode with 4 queues per pool */
>> +	/* 32 pool mode with up to 4 queues per pool */
>> 	} else {
>> 		vmdq_m = IXGBE_82599_VMDQ_4Q_MASK;
>> 		rss_m = IXGBE_RSS_4Q_MASK;
>> -		rss_i = 4;
>> +		/* We can support 4, 2, or 1 queues */
>> +		rss_i = (rss_i > 3) ? 4 : (rss_i > 1) ? 2 : 1;
>> 	}
>> 
>> #ifdef IXGBE_FCOE
>> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
>> index 1c888588cecd..a244d9a67264 100644
>> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
>> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
>> @@ -3248,7 +3248,8 @@ static void ixgbe_setup_mtqc(struct ixgbe_adapter *adapter)
>> 			mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;
>> 		else if (tcs > 1)
>> 			mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ;
>> -		else if (adapter->ring_feature[RING_F_RSS].indices == 4)
>> +		else if (adapter->ring_feature[RING_F_VMDQ].mask ==
>> +			 IXGBE_82599_VMDQ_4Q_MASK)
>> 			mtqc |= IXGBE_MTQC_32VF;
>> 		else
>> 			mtqc |= IXGBE_MTQC_64VF;
>> @@ -3475,12 +3476,12 @@ static void ixgbe_setup_reta(struct ixgbe_adapter *adapter)
>> 	u32 reta_entries = ixgbe_rss_indir_tbl_entries(adapter);
>> 	u16 rss_i = adapter->ring_feature[RING_F_RSS].indices;
>> 
>> -	/* Program table for at least 2 queues w/ SR-IOV so that VFs can
>> +	/* Program table for at least 4 queues w/ SR-IOV so that VFs can
>> 	 * make full use of any rings they may have.  We will use the
>> 	 * PSRTYPE register to control how many rings we use within the PF.
>> 	 */
>> -	if ((adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 2))
>> -		rss_i = 2;
>> +	if ((adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 4))
>> +		rss_i = 4;
>> 
>> 	/* Fill out hash function seeds */
>> 	for (i = 0; i < 10; i++)
>> @@ -3544,7 +3545,8 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter)
>> 				mrqc = IXGBE_MRQC_VMDQRT8TCEN;	/* 8 TCs */
>> 			else if (tcs > 1)
>> 				mrqc = IXGBE_MRQC_VMDQRT4TCEN;	/* 4 TCs */
>> -			else if (adapter->ring_feature[RING_F_RSS].indices == 4)
>> +			else if (adapter->ring_feature[RING_F_VMDQ].mask ==
>> +				 IXGBE_82599_VMDQ_4Q_MASK)
>> 				mrqc = IXGBE_MRQC_VMDQRSS32EN;
>> 			else
>> 				mrqc = IXGBE_MRQC_VMDQRSS64EN;
>> 
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [Intel-wired-lan] [next PATCH 3/3] ixgbe: Support 4 queue RSS on VFs with 1 or 2 queue RSS on PF
  2016-09-09  1:14     ` Ruslan Nikolaev
@ 2016-09-09 15:32       ` Alexander Duyck
  2016-09-09 21:00         ` Ruslan Nikolaev
  0 siblings, 1 reply; 12+ messages in thread
From: Alexander Duyck @ 2016-09-09 15:32 UTC (permalink / raw)
  To: intel-wired-lan

That shouldn't be needed since the VFRETA is actually per virtual pool,
so it isn't shared between the PF and the VFs.  In theory we could support
3 queues on the X550 without any issues, but that would change how things
are currently handled, so I figured I would leave it as it is for now.

- Alex

On Thu, Sep 8, 2016 at 6:14 PM, Ruslan Nikolaev <ruslan@purestorage.com> wrote:
> I still have on more question. There is also ixgbe_setup_vfreta function in
> the code. Do we need to adjust rss_i there as well?
>
> Thanks,
> Ruslan
>
> On Sep 8, 2016, at 11:20 AM, Ruslan Nikolaev <ruslan@purestorage.com> wrote:
>
> Thank you very much for giving the feedback and creating new set of patches!
>
> Ruslan
>
> On Sep 7, 2016, at 8:28 PM, Alexander Duyck <alexander.duyck@gmail.com>
> wrote:
>
> From: Alexander Duyck <alexander.h.duyck@intel.com>
>
> Instead of limiting the VFs if we don't use 4 queues for RSS in the PF we
> can instead just limit the RSS queues used to a power of 2.  By doing this
> we can support use cases where VFs are using more queues than the PF is
> currently using and can support RSS if so desired.
>
> The only limitation on this is that we cannot support 3 queues of RSS in
> the PF or VF.  In either of these cases we should fall back to 2 queues in
> order to be able to use the power of 2 masking provided by the psrtype
> register.
>
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
> drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |    7 ++++---
> drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   12 +++++++-----
> 2 files changed, 11 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> index bcdc88444ceb..15ab337fd7ad 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> @@ -515,15 +515,16 @@ static bool ixgbe_set_sriov_queues(struct
> ixgbe_adapter *adapter)
> vmdq_i = min_t(u16, IXGBE_MAX_VMDQ_INDICES, vmdq_i);
>
> /* 64 pool mode with 2 queues per pool */
> - if ((vmdq_i > 32) || (rss_i < 4) || (vmdq_i > 16 && pools)) {
> + if ((vmdq_i > 32) || (vmdq_i > 16 && pools)) {
> vmdq_m = IXGBE_82599_VMDQ_2Q_MASK;
> rss_m = IXGBE_RSS_2Q_MASK;
> rss_i = min_t(u16, rss_i, 2);
> - /* 32 pool mode with 4 queues per pool */
> + /* 32 pool mode with up to 4 queues per pool */
> } else {
> vmdq_m = IXGBE_82599_VMDQ_4Q_MASK;
> rss_m = IXGBE_RSS_4Q_MASK;
> - rss_i = 4;
> + /* We can support 4, 2, or 1 queues */
> + rss_i = (rss_i > 3) ? 4 : (rss_i > 1) ? 2 : 1;
> }
>
> #ifdef IXGBE_FCOE
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> index 1c888588cecd..a244d9a67264 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> @@ -3248,7 +3248,8 @@ static void ixgbe_setup_mtqc(struct ixgbe_adapter
> *adapter)
> mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;
> else if (tcs > 1)
> mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ;
> - else if (adapter->ring_feature[RING_F_RSS].indices == 4)
> + else if (adapter->ring_feature[RING_F_VMDQ].mask ==
> + IXGBE_82599_VMDQ_4Q_MASK)
> mtqc |= IXGBE_MTQC_32VF;
> else
> mtqc |= IXGBE_MTQC_64VF;
> @@ -3475,12 +3476,12 @@ static void ixgbe_setup_reta(struct ixgbe_adapter
> *adapter)
> u32 reta_entries = ixgbe_rss_indir_tbl_entries(adapter);
> u16 rss_i = adapter->ring_feature[RING_F_RSS].indices;
>
> - /* Program table for at least 2 queues w/ SR-IOV so that VFs can
> + /* Program table for at least 4 queues w/ SR-IOV so that VFs can
> * make full use of any rings they may have.  We will use the
> * PSRTYPE register to control how many rings we use within the PF.
> */
> - if ((adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 2))
> - rss_i = 2;
> + if ((adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 4))
> + rss_i = 4;
>
> /* Fill out hash function seeds */
> for (i = 0; i < 10; i++)
> @@ -3544,7 +3545,8 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter
> *adapter)
> mrqc = IXGBE_MRQC_VMDQRT8TCEN; /* 8 TCs */
> else if (tcs > 1)
> mrqc = IXGBE_MRQC_VMDQRT4TCEN; /* 4 TCs */
> - else if (adapter->ring_feature[RING_F_RSS].indices == 4)
> + else if (adapter->ring_feature[RING_F_VMDQ].mask ==
> + IXGBE_82599_VMDQ_4Q_MASK)
> mrqc = IXGBE_MRQC_VMDQRSS32EN;
> else
> mrqc = IXGBE_MRQC_VMDQRSS64EN;
>
>
>
>
> _______________________________________________
> Intel-wired-lan mailing list
> Intel-wired-lan at lists.osuosl.org
> http://lists.osuosl.org/mailman/listinfo/intel-wired-lan
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [Intel-wired-lan] [next PATCH 1/3] ixgbe: Allow setting multiple queues when SR-IOV is enabled
  2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 1/3] ixgbe: Allow setting multiple queues when SR-IOV is enabled Alexander Duyck
@ 2016-09-09 18:43   ` Bowers, AndrewX
  0 siblings, 0 replies; 12+ messages in thread
From: Bowers, AndrewX @ 2016-09-09 18:43 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at lists.osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, September 07, 2016 8:28 PM
> To: intel-wired-lan at lists.osuosl.org
> Cc: ruslan at purestorage.com; Jayakumar, Muthurajan
> <muthurajan.jayakumar@intel.com>; Blevins, Christopher R
> <christopher.r.blevins@intel.com>
> Subject: [Intel-wired-lan] [next PATCH 1/3] ixgbe: Allow setting multiple
> queues when SR-IOV is enabled
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> The maximum queue count reported was 1, however support for multiple
> queues with SR-IOV was added some time ago so we should report support
> for it to the user so that they can select multiple queues if they so desire.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [Intel-wired-lan] [next PATCH 2/3] ixgbe: Limit reporting of redirection table if SR-IOV is enabled
  2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 2/3] ixgbe: Limit reporting of redirection table if " Alexander Duyck
@ 2016-09-09 18:44   ` Bowers, AndrewX
  0 siblings, 0 replies; 12+ messages in thread
From: Bowers, AndrewX @ 2016-09-09 18:44 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at lists.osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, September 07, 2016 8:28 PM
> To: intel-wired-lan at lists.osuosl.org
> Cc: ruslan at purestorage.com; Jayakumar, Muthurajan
> <muthurajan.jayakumar@intel.com>; Blevins, Christopher R
> <christopher.r.blevins@intel.com>
> Subject: [Intel-wired-lan] [next PATCH 2/3] ixgbe: Limit reporting of
> redirection table if SR-IOV is enabled
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> The hardware redirection table can support more queues then the PF
> currently has when SR-IOV is enabled.  In order to account for this use the
> RSS mask to trim of the bits that are not used.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c |    6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [Intel-wired-lan] [next PATCH 3/3] ixgbe: Support 4 queue RSS on VFs with 1 or 2 queue RSS on PF
  2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 3/3] ixgbe: Support 4 queue RSS on VFs with 1 or 2 queue RSS on PF Alexander Duyck
  2016-09-08 18:20   ` Ruslan Nikolaev
@ 2016-09-09 18:46   ` Bowers, AndrewX
  1 sibling, 0 replies; 12+ messages in thread
From: Bowers, AndrewX @ 2016-09-09 18:46 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at lists.osuosl.org] On
> Behalf Of Alexander Duyck
> Sent: Wednesday, September 07, 2016 8:28 PM
> To: intel-wired-lan at lists.osuosl.org
> Cc: ruslan at purestorage.com; Jayakumar, Muthurajan
> <muthurajan.jayakumar@intel.com>; Blevins, Christopher R
> <christopher.r.blevins@intel.com>
> Subject: [Intel-wired-lan] [next PATCH 3/3] ixgbe: Support 4 queue RSS on
> VFs with 1 or 2 queue RSS on PF
> 
> From: Alexander Duyck <alexander.h.duyck@intel.com>
> 
> Instead of limiting the VFs if we don't use 4 queues for RSS in the PF we can
> instead just limit the RSS queues used to a power of 2.  By doing this we can
> support use cases where VFs are using more queues than the PF is currently
> using and can support RSS if so desired.
> 
> The only limitation on this is that we cannot support 3 queues of RSS in the PF
> or VF.  In either of these cases we should fall back to 2 queues in order to be
> able to use the power of 2 masking provided by the psrtype register.
> 
> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |    7 ++++---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   12 +++++++-----
>  2 files changed, 11 insertions(+), 8 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 12+ messages in thread

* [Intel-wired-lan] [next PATCH 3/3] ixgbe: Support 4 queue RSS on VFs with 1 or 2 queue RSS on PF
  2016-09-09 15:32       ` Alexander Duyck
@ 2016-09-09 21:00         ` Ruslan Nikolaev
  2016-09-10 17:40           ` Alexander Duyck
  0 siblings, 1 reply; 12+ messages in thread
From: Ruslan Nikolaev @ 2016-09-09 21:00 UTC (permalink / raw)
  To: intel-wired-lan

Sorry in advance if I have any misconception regarding how VFRETA works. But I was thinking more about rss_i being smaller than needed. For instance, rss_i is 1 (because RSS=1 for the PF) but we use VFs with a larger number of RX queues (RSS=2 or RSS=4 for the VF). Should we program the table for the VFs using the value 2 (or 4) so that we at least support 2 RX queues (or 4 RX queues if interrupts are shared)? I guess it does not matter when we use just one RX queue.

Thanks,
Ruslan

On Sep 9, 2016, at 8:32 AM, Alexander Duyck <alexander.duyck@gmail.com> wrote:

> That shouldn't be needed since the VFRETA is actually per virtual pool
> so it isn't shared with the VFs.  In theory we could support 3 queues
> on the X550 without any issues, but that would be a change of how
> things are currently handled so I figured I will leave it as it is for
> now.
> 
> - Alex
> 
> On Thu, Sep 8, 2016 at 6:14 PM, Ruslan Nikolaev <ruslan@purestorage.com> wrote:
>> I still have on more question. There is also ixgbe_setup_vfreta function in
>> the code. Do we need to adjust rss_i there as well?
>> 
>> Thanks,
>> Ruslan
>> 
>> On Sep 8, 2016, at 11:20 AM, Ruslan Nikolaev <ruslan@purestorage.com> wrote:
>> 
>> Thank you very much for giving the feedback and creating new set of patches!
>> 
>> Ruslan
>> 
>> On Sep 7, 2016, at 8:28 PM, Alexander Duyck <alexander.duyck@gmail.com>
>> wrote:
>> 
>> From: Alexander Duyck <alexander.h.duyck@intel.com>
>> 
>> Instead of limiting the VFs if we don't use 4 queues for RSS in the PF we
>> can instead just limit the RSS queues used to a power of 2.  By doing this
>> we can support use cases where VFs are using more queues than the PF is
>> currently using and can support RSS if so desired.
>> 
>> The only limitation on this is that we cannot support 3 queues of RSS in
>> the PF or VF.  In either of these cases we should fall back to 2 queues in
>> order to be able to use the power of 2 masking provided by the psrtype
>> register.
>> 
>> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
>> ---
>> drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |    7 ++++---
>> drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   12 +++++++-----
>> 2 files changed, 11 insertions(+), 8 deletions(-)
>> 
>> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>> b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>> index bcdc88444ceb..15ab337fd7ad 100644
>> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
>> @@ -515,15 +515,16 @@ static bool ixgbe_set_sriov_queues(struct
>> ixgbe_adapter *adapter)
>> vmdq_i = min_t(u16, IXGBE_MAX_VMDQ_INDICES, vmdq_i);
>> 
>> /* 64 pool mode with 2 queues per pool */
>> - if ((vmdq_i > 32) || (rss_i < 4) || (vmdq_i > 16 && pools)) {
>> + if ((vmdq_i > 32) || (vmdq_i > 16 && pools)) {
>> vmdq_m = IXGBE_82599_VMDQ_2Q_MASK;
>> rss_m = IXGBE_RSS_2Q_MASK;
>> rss_i = min_t(u16, rss_i, 2);
>> - /* 32 pool mode with 4 queues per pool */
>> + /* 32 pool mode with up to 4 queues per pool */
>> } else {
>> vmdq_m = IXGBE_82599_VMDQ_4Q_MASK;
>> rss_m = IXGBE_RSS_4Q_MASK;
>> - rss_i = 4;
>> + /* We can support 4, 2, or 1 queues */
>> + rss_i = (rss_i > 3) ? 4 : (rss_i > 1) ? 2 : 1;
>> }
>> 
>> #ifdef IXGBE_FCOE
>> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
>> b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
>> index 1c888588cecd..a244d9a67264 100644
>> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
>> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
>> @@ -3248,7 +3248,8 @@ static void ixgbe_setup_mtqc(struct ixgbe_adapter
>> *adapter)
>> mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;
>> else if (tcs > 1)
>> mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ;
>> - else if (adapter->ring_feature[RING_F_RSS].indices == 4)
>> + else if (adapter->ring_feature[RING_F_VMDQ].mask ==
>> + IXGBE_82599_VMDQ_4Q_MASK)
>> mtqc |= IXGBE_MTQC_32VF;
>> else
>> mtqc |= IXGBE_MTQC_64VF;
>> @@ -3475,12 +3476,12 @@ static void ixgbe_setup_reta(struct ixgbe_adapter
>> *adapter)
>> u32 reta_entries = ixgbe_rss_indir_tbl_entries(adapter);
>> u16 rss_i = adapter->ring_feature[RING_F_RSS].indices;
>> 
>> - /* Program table for at least 2 queues w/ SR-IOV so that VFs can
>> + /* Program table for at least 4 queues w/ SR-IOV so that VFs can
>> * make full use of any rings they may have.  We will use the
>> * PSRTYPE register to control how many rings we use within the PF.
>> */
>> - if ((adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 2))
>> - rss_i = 2;
>> + if ((adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 4))
>> + rss_i = 4;
>> 
>> /* Fill out hash function seeds */
>> for (i = 0; i < 10; i++)
>> @@ -3544,7 +3545,8 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter
>> *adapter)
>> mrqc = IXGBE_MRQC_VMDQRT8TCEN; /* 8 TCs */
>> else if (tcs > 1)
>> mrqc = IXGBE_MRQC_VMDQRT4TCEN; /* 4 TCs */
>> - else if (adapter->ring_feature[RING_F_RSS].indices == 4)
>> + else if (adapter->ring_feature[RING_F_VMDQ].mask ==
>> + IXGBE_82599_VMDQ_4Q_MASK)
>> mrqc = IXGBE_MRQC_VMDQRSS32EN;
>> else
>> mrqc = IXGBE_MRQC_VMDQRSS64EN;
>> 
>> 
>> 
>> 
>> _______________________________________________
>> Intel-wired-lan mailing list
>> Intel-wired-lan at lists.osuosl.org
>> http://lists.osuosl.org/mailman/listinfo/intel-wired-lan
>> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [Intel-wired-lan] [next PATCH 3/3] ixgbe: Support 4 queue RSS on VFs with 1 or 2 queue RSS on PF
  2016-09-09 21:00         ` Ruslan Nikolaev
@ 2016-09-10 17:40           ` Alexander Duyck
  0 siblings, 0 replies; 12+ messages in thread
From: Alexander Duyck @ 2016-09-10 17:40 UTC (permalink / raw)
  To: intel-wired-lan

The VFRETA can be programmed as you like.  Each VF has its own table, so the
PF config will not impact the VFs.  All it impacts is the local pool the PF is
using for its own traffic.  So if you write it to all zeros, that will set it
up for only one queue.  As long as none of the entries exceed the number of
queues the VF has, it shouldn't be a problem.
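
As a rough illustration (this is not the driver's ixgbe_setup_vfreta, and
the names are made up), filling a per-VF table so that no entry exceeds the
VF's queue count might look like this:

/* Hypothetical sketch: spread redirection table entries across a VF's
 * queues (vf_queues is assumed to be at least 1).  Writing all entries as
 * 0 instead would steer everything to a single queue, as described above.
 */
static void example_fill_vf_reta(u8 *reta, unsigned int entries,
				 unsigned int vf_queues)
{
	unsigned int i;

	for (i = 0; i < entries; i++)
		reta[i] = i % vf_queues;
}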

- Alex

On Friday, September 9, 2016, Ruslan Nikolaev <ruslan@purestorage.com>
wrote:

> Sorry in advance if I have any misconception regarding how VFRETA works.
> But I was more thinking about rss_i being smaller than needed. For
> instance, rss_i is 1 (because RSS=1 for PF) but we use VFs with larger
> number of RX queues (RSS=2 or RSS=4 for VF). Should we program the table
> for VFs using value 2 (or 4) that we at least support 2 RX queues (or 4 RX
> queues if interrupts are shared)? I guess, it does not matter when we use
> just one RX queue.
>
> Thanks,
> Ruslan
>
> On Sep 9, 2016, at 8:32 AM, Alexander Duyck <alexander.duyck@gmail.com
> <javascript:;>> wrote:
>
> > That shouldn't be needed since the VFRETA is actually per virtual pool
> > so it isn't shared with the VFs.  In theory we could support 3 queues
> > on the X550 without any issues, but that would be a change of how
> > things are currently handled so I figured I will leave it as it is for
> > now.
> >
> > - Alex
> >
> > On Thu, Sep 8, 2016 at 6:14 PM, Ruslan Nikolaev <ruslan@purestorage.com
> <javascript:;>> wrote:
> >> I still have on more question. There is also ixgbe_setup_vfreta
> function in
> >> the code. Do we need to adjust rss_i there as well?
> >>
> >> Thanks,
> >> Ruslan
> >>
> >> On Sep 8, 2016, at 11:20 AM, Ruslan Nikolaev <ruslan@purestorage.com
> <javascript:;>> wrote:
> >>
> >> Thank you very much for giving the feedback and creating new set of
> patches!
> >>
> >> Ruslan
> >>
> >> On Sep 7, 2016, at 8:28 PM, Alexander Duyck <alexander.duyck@gmail.com
> <javascript:;>>
> >> wrote:
> >>
> >> From: Alexander Duyck <alexander.h.duyck@intel.com <javascript:;>>
> >>
> >> Instead of limiting the VFs if we don't use 4 queues for RSS in the PF
> we
> >> can instead just limit the RSS queues used to a power of 2.  By doing
> this
> >> we can support use cases where VFs are using more queues than the PF is
> >> currently using and can support RSS if so desired.
> >>
> >> The only limitation on this is that we cannot support 3 queues of RSS in
> >> the PF or VF.  In either of these cases we should fall back to 2 queues
> in
> >> order to be able to use the power of 2 masking provided by the psrtype
> >> register.
> >>
> >> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com
> <javascript:;>>
> >> ---
> >> drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c  |    7 ++++---
> >> drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   12 +++++++-----
> >> 2 files changed, 11 insertions(+), 8 deletions(-)
> >>
> >> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> >> b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> >> index bcdc88444ceb..15ab337fd7ad 100644
> >> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> >> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
> >> @@ -515,15 +515,16 @@ static bool ixgbe_set_sriov_queues(struct
> >> ixgbe_adapter *adapter)
> >> vmdq_i = min_t(u16, IXGBE_MAX_VMDQ_INDICES, vmdq_i);
> >>
> >> /* 64 pool mode with 2 queues per pool */
> >> - if ((vmdq_i > 32) || (rss_i < 4) || (vmdq_i > 16 && pools)) {
> >> + if ((vmdq_i > 32) || (vmdq_i > 16 && pools)) {
> >> vmdq_m = IXGBE_82599_VMDQ_2Q_MASK;
> >> rss_m = IXGBE_RSS_2Q_MASK;
> >> rss_i = min_t(u16, rss_i, 2);
> >> - /* 32 pool mode with 4 queues per pool */
> >> + /* 32 pool mode with up to 4 queues per pool */
> >> } else {
> >> vmdq_m = IXGBE_82599_VMDQ_4Q_MASK;
> >> rss_m = IXGBE_RSS_4Q_MASK;
> >> - rss_i = 4;
> >> + /* We can support 4, 2, or 1 queues */
> >> + rss_i = (rss_i > 3) ? 4 : (rss_i > 1) ? 2 : 1;
> >> }
> >>
> >> #ifdef IXGBE_FCOE
> >> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> >> b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> >> index 1c888588cecd..a244d9a67264 100644
> >> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> >> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> >> @@ -3248,7 +3248,8 @@ static void ixgbe_setup_mtqc(struct ixgbe_adapter
> >> *adapter)
> >> mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;
> >> else if (tcs > 1)
> >> mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ;
> >> - else if (adapter->ring_feature[RING_F_RSS].indices == 4)
> >> + else if (adapter->ring_feature[RING_F_VMDQ].mask ==
> >> + IXGBE_82599_VMDQ_4Q_MASK)
> >> mtqc |= IXGBE_MTQC_32VF;
> >> else
> >> mtqc |= IXGBE_MTQC_64VF;
> >> @@ -3475,12 +3476,12 @@ static void ixgbe_setup_reta(struct
> ixgbe_adapter
> >> *adapter)
> >> u32 reta_entries = ixgbe_rss_indir_tbl_entries(adapter);
> >> u16 rss_i = adapter->ring_feature[RING_F_RSS].indices;
> >>
> >> - /* Program table for at least 2 queues w/ SR-IOV so that VFs can
> >> + /* Program table for at least 4 queues w/ SR-IOV so that VFs can
> >> * make full use of any rings they may have.  We will use the
> >> * PSRTYPE register to control how many rings we use within the PF.
> >> */
> >> - if ((adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 2))
> >> - rss_i = 2;
> >> + if ((adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) && (rss_i < 4))
> >> + rss_i = 4;
> >>
> >> /* Fill out hash function seeds */
> >> for (i = 0; i < 10; i++)
> >> @@ -3544,7 +3545,8 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter
> >> *adapter)
> >> mrqc = IXGBE_MRQC_VMDQRT8TCEN; /* 8 TCs */
> >> else if (tcs > 1)
> >> mrqc = IXGBE_MRQC_VMDQRT4TCEN; /* 4 TCs */
> >> - else if (adapter->ring_feature[RING_F_RSS].indices == 4)
> >> + else if (adapter->ring_feature[RING_F_VMDQ].mask ==
> >> + IXGBE_82599_VMDQ_4Q_MASK)
> >> mrqc = IXGBE_MRQC_VMDQRSS32EN;
> >> else
> >> mrqc = IXGBE_MRQC_VMDQRSS64EN;
> >>
> >>
> >>
> >>
> >> _______________________________________________
> >> Intel-wired-lan mailing list
> >> Intel-wired-lan at lists.osuosl.org <javascript:;>
> >> http://lists.osuosl.org/mailman/listinfo/intel-wired-lan
> >>
>
> _______________________________________________
> Intel-wired-lan mailing list
> Intel-wired-lan at lists.osuosl.org <javascript:;>
> http://lists.osuosl.org/mailman/listinfo/intel-wired-lan
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2016-09-10 17:40 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-09-08  3:28 [Intel-wired-lan] [next PATCH 0/3] Add support for 4 queues VFs with 1 or 2 queue RSS on PF Alexander Duyck
2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 1/3] ixgbe: Allow setting multiple queues when SR-IOV is enabled Alexander Duyck
2016-09-09 18:43   ` Bowers, AndrewX
2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 2/3] ixgbe: Limit reporting of redirection table if " Alexander Duyck
2016-09-09 18:44   ` Bowers, AndrewX
2016-09-08  3:28 ` [Intel-wired-lan] [next PATCH 3/3] ixgbe: Support 4 queue RSS on VFs with 1 or 2 queue RSS on PF Alexander Duyck
2016-09-08 18:20   ` Ruslan Nikolaev
2016-09-09  1:14     ` Ruslan Nikolaev
2016-09-09 15:32       ` Alexander Duyck
2016-09-09 21:00         ` Ruslan Nikolaev
2016-09-10 17:40           ` Alexander Duyck
2016-09-09 18:46   ` Bowers, AndrewX
