* [Intel-wired-lan] Dear Wired Lan Experts, kindly offer your guidance on customer's input please
@ 2016-09-07 16:12 Jayakumar, Muthurajan
2016-09-07 18:29 ` Skidmore, Donald C
2016-09-08 1:47 ` Alexander Duyck
0 siblings, 2 replies; 3+ messages in thread
From: Jayakumar, Muthurajan @ 2016-09-07 16:12 UTC (permalink / raw)
To: intel-wired-lan
Dear Wired Lan Experts,
Kindly offer your guidance on the following customer's input below, please.
Much appreciated.
Best Regards,
M Jay
In the ixgbe driver (82599EB/X540/X550/X550EM_x), if RSS < 4 and we allocate VFs, the driver forces 2-queue (per VF) mode instead of 4-queue mode (see the ixgbe_set_vmdq_queues function in ixgbe_lib.c).
Is there any fundamental reason to do so? Is it still possible to use 4-queue mode (per VF) even when RSS=1 (i.e., the physical function uses only 1 TX/RX queue but all VFs use 4 TX/RX queues)?
The proposed change to use 4-queue mode is as follows (please double check if anything else needs to be changed):
diff -urN src/ixgbe_lib.c src_new/ixgbe_lib.c
--- src/ixgbe_lib.c 2016-07-12 13:11:56.976425563 -0700
+++ src_new/ixgbe_lib.c 2016-07-12 13:12:55.540425563 -0700
@@ -582,7 +582,7 @@
vmdq_i = min_t(u16, IXGBE_MAX_VMDQ_INDICES, vmdq_i);
/* 64 pool mode with 2 queues per pool */
- if ((vmdq_i > 32) || (rss_i < 4)) {
+ if (vmdq_i > 32) {
vmdq_m = IXGBE_82599_VMDQ_2Q_MASK;
rss_m = IXGBE_RSS_2Q_MASK;
rss_i = min_t(u16, rss_i, 2);
@@ -590,7 +590,7 @@
} else {
vmdq_m = IXGBE_82599_VMDQ_4Q_MASK;
rss_m = IXGBE_RSS_4Q_MASK;
- rss_i = 4;
+ rss_i = min_t(u16, rss_i, 4);
}
#if IS_ENABLED(CONFIG_FCOE)
diff -urN src/ixgbe_main.c src_new/ixgbe_main.c
--- src/ixgbe_main.c 2016-07-12 13:11:56.980425563 -0700
+++ src_new/ixgbe_main.c 2016-07-12 13:12:55.544425563 -0700
@@ -2883,7 +2883,7 @@
mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;
else if (tcs > 1)
mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ;
- else if (adapter->ring_feature[RING_F_RSS].indices == 4)
+ else if (adapter->ring_feature[RING_F_VMDQ].mask == IXGBE_82599_VMDQ_4Q_MASK)
mtqc |= IXGBE_MTQC_32VF;
else
mtqc |= IXGBE_MTQC_64VF;
@@ -3186,13 +3186,12 @@
mrqc = IXGBE_MRQC_RSSEN;
} else {
u8 tcs = netdev_get_num_tc(adapter->netdev);
-
if (adapter->flags & IXGBE_FLAG_VMDQ_ENABLED) {
if (tcs > 4)
mrqc = IXGBE_MRQC_VMDQRT8TCEN; /* 8 TCs */
else if (tcs > 1)
mrqc = IXGBE_MRQC_VMDQRT4TCEN; /* 4 TCs */
- else if (adapter->ring_feature[RING_F_RSS].indices == 4)
+ else if (adapter->ring_feature[RING_F_VMDQ].mask == IXGBE_82599_VMDQ_4Q_MASK)
mrqc = IXGBE_MRQC_VMDQRSS32EN;
else
mrqc = IXGBE_MRQC_VMDQRSS64EN;
* [Intel-wired-lan] Dear Wired Lan Experts, kindly offer your guidance on customer's input please
From: Skidmore, Donald C @ 2016-09-07 18:29 UTC (permalink / raw)
To: intel-wired-lan
The limitation is due to the interrupt limits of a VF. We only get four, and since two queues already need 3 interrupts (1 per queue and 1 for LSC among other events), we don't have enough to support a 4-queue VF, and sharing interrupts would eliminate the advantage of the additional queues. :(
Thanks,
-Don <donald.c.skidmore@intel.com>
* [Intel-wired-lan] Dear Wired Lan Experts, kindly offer your guidance on customer's input please
From: Alexander Duyck @ 2016-09-08 1:47 UTC (permalink / raw)
To: intel-wired-lan
On Wed, Sep 7, 2016 at 9:12 AM, Jayakumar, Muthurajan
<muthurajan.jayakumar@intel.com> wrote:
>
> Dear Wired Lan Experts,
>
> kindly offer your guidance on following customer's input below please
>
> Much appreciated.
>
> Best Regards,
>
> M Jay
>
> In the ixgbe driver (82599EB/X540/X550/X550EM_x), if RSS < 4 and we allocate
> VFs, the driver forces to use 2-queue (per VF) mode instead of 4-queue mode
> (see ixgbe_set_vmdq_queues function in ixgbe_lib.c)
> Is there any fundamental reason to do so? Is it possible to still use
> 4-queue mode (per VF) even when RSS=1 (i.e., a physical function uses only 1
> TX/RX queue but all VFs use 4 TX/RX queues)
>
> The proposed change to use 4-queue mode is as follows (please double check
> if anything else needs to be changed):
>
> diff -urN src/ixgbe_lib.c src_new/ixgbe_lib.c
> --- src/ixgbe_lib.c 2016-07-12 13:11:56.976425563 -0700
> +++ src_new/ixgbe_lib.c 2016-07-12 13:12:55.540425563 -0700
> @@ -582,7 +582,7 @@
> vmdq_i = min_t(u16, IXGBE_MAX_VMDQ_INDICES, vmdq_i);
>
> /* 64 pool mode with 2 queues per pool */
> - if ((vmdq_i > 32) || (rss_i < 4)) {
> + if (vmdq_i > 32) {
> vmdq_m = IXGBE_82599_VMDQ_2Q_MASK;
> rss_m = IXGBE_RSS_2Q_MASK;
> rss_i = min_t(u16, rss_i, 2);
This change is bogus and provides no value. The reason we cap
things with the rss_i < 4 check is that if rss_i is 3, the VFs
wouldn't be able to access the 4th queue: the redirection table is
populated only with queue indices 0 through rss_i - 1, so no entry
ever points at the 4th queue. There isn't much point in enabling 4
queues on the VF if it cannot access them all.
> @@ -590,7 +590,7 @@
> } else {
> vmdq_m = IXGBE_82599_VMDQ_4Q_MASK;
> rss_m = IXGBE_RSS_4Q_MASK;
> - rss_i = 4;
> + rss_i = min_t(u16, rss_i, 4);
> }
>
This change would only make sense if the first change were valid,
which it isn't. It isn't worth allocating 4 queues per VF if they
can only access 3 because the PF hasn't populated the entries in the
redirection table.
> #if IS_ENABLED(CONFIG_FCOE)
> diff -urN src/ixgbe_main.c src_new/ixgbe_main.c
> --- src/ixgbe_main.c 2016-07-12 13:11:56.980425563 -0700
> +++ src_new/ixgbe_main.c 2016-07-12 13:12:55.544425563 -0700
> @@ -2883,7 +2883,7 @@
> mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_8TC_8TQ;
> else if (tcs > 1)
> mtqc |= IXGBE_MTQC_RT_ENA | IXGBE_MTQC_4TC_4TQ;
> - else if (adapter->ring_feature[RING_F_RSS].indices == 4)
> + else if (adapter->ring_feature[RING_F_VMDQ].mask == IXGBE_82599_VMDQ_4Q_MASK)
> mtqc |= IXGBE_MTQC_32VF;
> else
> mtqc |= IXGBE_MTQC_64VF;
> @@ -3186,13 +3186,12 @@
> mrqc = IXGBE_MRQC_RSSEN;
> } else {
> u8 tcs = netdev_get_num_tc(adapter->netdev);
> -
> if (adapter->flags & IXGBE_FLAG_VMDQ_ENABLED) {
> if (tcs > 4)
> mrqc = IXGBE_MRQC_VMDQRT8TCEN; /* 8 TCs */
> else if (tcs > 1)
> mrqc = IXGBE_MRQC_VMDQRT4TCEN; /* 4 TCs */
> - else if (adapter->ring_feature[RING_F_RSS].indices == 4)
> + else if (adapter->ring_feature[RING_F_VMDQ].mask == IXGBE_82599_VMDQ_4Q_MASK)
> mrqc = IXGBE_MRQC_VMDQRSS32EN;
> else
> mrqc = IXGBE_MRQC_VMDQRSS64EN;
This piece is valid. You might consider submitting it as a separate
patch if you would like. All it is really changing, though, is which
field we check to determine whether we have 4 queues per pool.
As far as enabling 4 queues in the VF, it isn't very hard as long as
the PF is configured to use 4 queues. There are really only 3 changes
needed. I briefly tested the patch below and verified I can run 4
queues of RSS using a netperf TCP_CRR test. I'm sure it is going to
get whitespace-mangled by my mail client, but this should give you
the general idea.
Author: Alexander Duyck <alexander.h.duyck@intel.com>
Date: Wed Sep 7 18:25:26 2016 -0700
ixgbevf: Add support for 4 queue RSS
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index 7eaac3234049..0a36b6e37298 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -1655,6 +1655,8 @@ static void ixgbevf_setup_psrtype(struct ixgbevf_adapter *adapter)
IXGBE_PSRTYPE_IPV4HDR | IXGBE_PSRTYPE_IPV6HDR |
IXGBE_PSRTYPE_L2HDR;
+ if (adapter->num_rx_queues > 3)
+ psrtype |= BIT(30);
if (adapter->num_rx_queues > 1)
psrtype |= BIT(29);
@@ -2000,6 +2002,12 @@ static int ixgbevf_configure_dcb(struct ixgbevf_adapter *adapter)
/* we need as many queues as traffic classes */
num_rx_queues = num_tcs;
+ } else {
+ /* clamp RSS to no more than maximum queues */
+ if (num_tx_queues > hw->mac.max_tx_queues)
+ num_tx_queues = hw->mac.max_tx_queues;
+ if (num_rx_queues > hw->mac.max_rx_queues)
+ num_rx_queues = hw->mac.max_rx_queues;
}
/* if we have a bad config abort request queue reset */
@@ -2365,7 +2373,7 @@ static void ixgbevf_set_num_queues(struct ixgbevf_adapter *adapter)
if (num_tcs > 1) {
adapter->num_rx_queues = num_tcs;
} else {
- u16 rss = min_t(u16, num_online_cpus(), IXGBEVF_MAX_RSS_QUEUES);
+ u16 rss = min_t(u16, num_online_cpus(), hw->mac.max_rx_queues);
switch (hw->api_version) {
case ixgbe_mbox_api_11: