* mlx4: Bug in XDP_TX + 16 rx-queues
@ 2016-12-17 10:18 Martin KaFai Lau
  2016-12-18 10:31 ` Tariq Toukan
  2016-12-19 23:37 ` Martin KaFai Lau
  0 siblings, 2 replies; 9+ messages in thread
From: Martin KaFai Lau @ 2016-12-17 10:18 UTC (permalink / raw)
  To: Saeed Mahameed, Tariq Toukan; +Cc: netdev, Alexei Starovoitov

Hi All,

I have been debugging with XDP_TX and 16 rx-queues.

1) When 16 rx-queues are used and an XDP prog is doing XDP_TX,
it seems that a packet cannot be XDP_TX'ed out if it
is received on some particular CPUs (/rx-queues).

2) If 8 rx-queues are used, there is no problem.

3) The 16 rx-queues problem also went away after reverting these
two patches:
15fca2c8eb41 net/mlx4_en: Add ethtool statistics for XDP cases
67f8b1dcb9ee net/mlx4_en: Refactor the XDP forwarding rings scheme

4) I can reproduce the problem by running samples/bpf/xdp_tx_iptunnel at
the receiver side.  The sender side sends out TCP packets with
source ports ranging from 1 to 1024.  Also at the sender side, run
tcpdump to capture the ip-tunnel packets reflected by xdp_tx_iptunnel.
With 8 rx-queues, I can get all 1024 packets back.  With 16 rx-queues,
I can only get 512 packets back.  It is a 40-CPU machine.
I also checked the rx*_xdp_tx counters (from ethtool -S eth0) to ensure
the xdp prog has XDP_TX-ed it out.
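
For reference, a minimal sketch of such a sender, emitting one TCP SYN from
each source port 1..1024 (the destination address/port below are placeholders
and the original test may well have used a different tool; varying the source
port is what spreads the flows across the rx-queues via RSS):

#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in src = { .sin_family = AF_INET };
	struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(9) };
	int port;

	inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);	/* placeholder receiver */

	for (port = 1; port <= 1024; port++) {
		int one = 1;
		int fd = socket(AF_INET, SOCK_STREAM, 0);

		if (fd < 0)
			continue;
		setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
		src.sin_port = htons(port);	/* binding ports < 1024 needs root */
		if (!bind(fd, (struct sockaddr *)&src, sizeof(src))) {
			fcntl(fd, F_SETFL, O_NONBLOCK);
			connect(fd, (struct sockaddr *)&dst, sizeof(dst));	/* puts one SYN on the wire */
		}
		close(fd);
	}
	return 0;
}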

Not saying that 67f8b1dcb9ee is 100% the cause because there are other
changes since then.  It is merely a brain dump on what I have already
tried.

Tariq/Saeed, any thoughts?  I can easily test some patches in
my setup.

Thanks,
--Martin


* Re: mlx4: Bug in XDP_TX + 16 rx-queues
  2016-12-17 10:18 mlx4: Bug in XDP_TX + 16 rx-queues Martin KaFai Lau
@ 2016-12-18 10:31 ` Tariq Toukan
  2016-12-18 18:14   ` Martin KaFai Lau
  2016-12-19 23:37 ` Martin KaFai Lau
  1 sibling, 1 reply; 9+ messages in thread
From: Tariq Toukan @ 2016-12-18 10:31 UTC (permalink / raw)
  To: Martin KaFai Lau, Saeed Mahameed, Tariq Toukan; +Cc: netdev, Alexei Starovoitov

Hi Martin,


On 17/12/2016 12:18 PM, Martin KaFai Lau wrote:
> Hi All,
>
> I have been debugging with XDP_TX and 16 rx-queues.
>
> 1) When 16 rx-queues are used and an XDP prog is doing XDP_TX,
> it seems that a packet cannot be XDP_TX'ed out if it
> is received on some particular CPUs (/rx-queues).
Does the rx_xdp_tx_full counter increase?
Does the problem repro if you turn off PFC?
     ethtool -A <intf> rx off tx off
>
> 2) If 8 rx-queues are used, there is no problem.
>
> 3) The 16 rx-queues problem also went away after reverting these
> two patches:
> 15fca2c8eb41 net/mlx4_en: Add ethtool statistics for XDP cases
> 67f8b1dcb9ee net/mlx4_en: Refactor the XDP forwarding rings scheme
>
> 4) I can reproduce the problem by running samples/bpf/xdp_tx_iptunnel at
> the receiver side.  The sender side sends out TCP packets with
> source ports ranging from 1 to 1024.  Also at the sender side, run
> tcpdump to capture the ip-tunnel packets reflected by xdp_tx_iptunnel.
> With 8 rx-queues, I can get all 1024 packets back.  With 16 rx-queues,
> I can only get 512 packets back.  It is a 40-CPU machine.
> I also checked the rx*_xdp_tx counters (from ethtool -S eth0) to ensure
> the xdp prog has XDP_TX-ed it out.
So all packets were transmitted (according to rx*_xdp_tx), and only half
of them were received on the other side?
>
> Not saying that 67f8b1dcb9ee is 100% the cause because there are other
> changes since then.  It is merely a brain dump on what I have already
> tried.
>
> Tariq/Saeed, any thoughts?  I can easily test some patches in
> my setup.
>
> Thanks,
> --Martin
Thanks,
Tariq


* Re: mlx4: Bug in XDP_TX + 16 rx-queues
  2016-12-18 10:31 ` Tariq Toukan
@ 2016-12-18 18:14   ` Martin KaFai Lau
  0 siblings, 0 replies; 9+ messages in thread
From: Martin KaFai Lau @ 2016-12-18 18:14 UTC (permalink / raw)
  To: Tariq Toukan; +Cc: Saeed Mahameed, Tariq Toukan, netdev, Alexei Starovoitov

On Sun, Dec 18, 2016 at 12:31:30PM +0200, Tariq Toukan wrote:
> Hi Martin,
>
>
> On 17/12/2016 12:18 PM, Martin KaFai Lau wrote:
> >Hi All,
> >
> >I have been debugging with XDP_TX and 16 rx-queues.
> >
> >1) When 16 rx-queues are used and an XDP prog is doing XDP_TX,
> >it seems that a packet cannot be XDP_TX'ed out if it
> >is received on some particular CPUs (/rx-queues).
> Does the rx_xdp_tx_full counter increase?
The rx_xdp_tx_full counter did not increase.  A capture of
ethtool -S eth0:

[root@kerneltest003.14.prn2 ~]# ethtool -S eth0 | egrep 'rx.*_xdp_tx.*:'
rx_xdp_tx: 1024
rx_xdp_tx_full: 0
rx0_xdp_tx: 64
rx0_xdp_tx_full: 0
rx1_xdp_tx: 64
rx1_xdp_tx_full: 0
rx2_xdp_tx: 64
rx2_xdp_tx_full: 0
rx3_xdp_tx: 64
rx3_xdp_tx_full: 0
rx4_xdp_tx: 64
rx4_xdp_tx_full: 0
rx5_xdp_tx: 64
rx5_xdp_tx_full: 0
rx6_xdp_tx: 64
rx6_xdp_tx_full: 0
rx7_xdp_tx: 64
rx7_xdp_tx_full: 0
rx8_xdp_tx: 64
rx8_xdp_tx_full: 0
rx9_xdp_tx: 63
rx9_xdp_tx_full: 0
rx10_xdp_tx: 65
rx10_xdp_tx_full: 0
rx11_xdp_tx: 64
rx11_xdp_tx_full: 0
rx12_xdp_tx: 64
rx12_xdp_tx_full: 0
rx13_xdp_tx: 64
rx13_xdp_tx_full: 0
rx14_xdp_tx: 64
rx14_xdp_tx_full: 0
rx15_xdp_tx: 64
rx15_xdp_tx_full: 0

> Does the problem repro if you turn off PFC?
>     ethtool -A <intf> rx off tx off
Turning pause off does not help.

> >
> >2) If 8 rx-queues are used, there is no problem.
> >
> >3) The 16 rx-queues problem also went away after reverting these
> >two patches:
> >15fca2c8eb41 net/mlx4_en: Add ethtool statistics for XDP cases
> >67f8b1dcb9ee net/mlx4_en: Refactor the XDP forwarding rings scheme
> >
> >4) I can reproduce the problem by running samples/bpf/xdp_tx_iptunnel at
> >the receiver side.  The sender side sends out TCP packets with
> >source ports ranging from 1 to 1024.  Also at the sender side, run
> >tcpdump to capture the ip-tunnel packets reflected by xdp_tx_iptunnel.
> >With 8 rx-queues, I can get all 1024 packets back.  With 16 rx-queues,
> >I can only get 512 packets back.  It is a 40-CPU machine.
> >I also checked the rx*_xdp_tx counters (from ethtool -S eth0) to ensure
> >the xdp prog has XDP_TX-ed it out.
> So all packets were transmitted (according to rx*_xdp_tx), and only half
> of them were received on the other side?
Correct.  The XDP program 'samples/bpf/xdp_tx_iptunnel' received,
processed, and sent out 1024 packets.  The rx*_xdp_tx counters also showed
all 1024 packets.  However, only half of them reached the other side (as
observed with tcpdump) when 16 rx-queues were used.

Thanks,
--Martin


* Re: mlx4: Bug in XDP_TX + 16 rx-queues
  2016-12-17 10:18 mlx4: Bug in XDP_TX + 16 rx-queues Martin KaFai Lau
  2016-12-18 10:31 ` Tariq Toukan
@ 2016-12-19 23:37 ` Martin KaFai Lau
  2016-12-20 12:02   ` Tariq Toukan
  1 sibling, 1 reply; 9+ messages in thread
From: Martin KaFai Lau @ 2016-12-19 23:37 UTC (permalink / raw)
  To: Tariq Toukan; +Cc: Saeed Mahameed, netdev, Alexei Starovoitov

Hi Tariq,

On Sat, Dec 17, 2016 at 02:18:03AM -0800, Martin KaFai Lau wrote:
> Hi All,
>
> I have been debugging with XDP_TX and 16 rx-queues.
>
> 1) When 16 rx-queues are used and an XDP prog is doing XDP_TX,
> it seems that a packet cannot be XDP_TX'ed out if it
> is received on some particular CPUs (/rx-queues).
>
> 2) If 8 rx-queues are used, there is no problem.
>
> 3) The 16 rx-queues problem also went away after reverting these
> two patches:
> 15fca2c8eb41 net/mlx4_en: Add ethtool statistics for XDP cases
> 67f8b1dcb9ee net/mlx4_en: Refactor the XDP forwarding rings scheme
>
After taking a closer look at 67f8b1dcb9ee ("net/mlx4_en: Refactor the XDP forwarding rings scheme"),
and armed with the fact that '>8 rx-queues does not work', I have
made the attached change, which fixes the issue.

Making a change in mlx4_en_fill_qp_context() could be an easier fix,
but I think this change will be easier for discussion purposes.

I won't claim to know anything about how this variable
works in CX3.  If this change makes sense, I can cook up a diff.
Otherwise, can you shed some light on what could be happening,
which hopefully can lead to a diff?

Thanks
--Martin


diff --git i/drivers/net/ethernet/mellanox/mlx4/en_netdev.c w/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index bcd955339058..b3bfb987e493 100644
--- i/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ w/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -1638,10 +1638,10 @@ int mlx4_en_start_port(struct net_device *dev)

 	/* Configure tx cq's and rings */
 	for (t = 0 ; t < MLX4_EN_NUM_TX_TYPES; t++) {
-		u8 num_tx_rings_p_up = t == TX ? priv->num_tx_rings_p_up : 1;
-
 		for (i = 0; i < priv->tx_ring_num[t]; i++) {
 			/* Configure cq */
+			int user_prio;
+
 			cq = priv->tx_cq[t][i];
 			err = mlx4_en_activate_cq(priv, cq, i);
 			if (err) {
@@ -1660,9 +1660,14 @@ int mlx4_en_start_port(struct net_device *dev)

 			/* Configure ring */
 			tx_ring = priv->tx_ring[t][i];
+			if (t != TX_XDP)
+				user_prio = i / priv->num_tx_rings_p_up;
+			else
+				user_prio = i & 0x07;
+
 			err = mlx4_en_activate_tx_ring(priv, tx_ring,
 						       cq->mcq.cqn,
-						       i / num_tx_rings_p_up);
+						       user_prio);
 			if (err) {
 				en_err(priv, "Failed allocating Tx ring\n");
 				mlx4_en_deactivate_cq(priv, cq);


* Re: mlx4: Bug in XDP_TX + 16 rx-queues
  2016-12-19 23:37 ` Martin KaFai Lau
@ 2016-12-20 12:02   ` Tariq Toukan
  2016-12-20 14:22     ` David Miller
  2016-12-20 18:31     ` Martin KaFai Lau
  0 siblings, 2 replies; 9+ messages in thread
From: Tariq Toukan @ 2016-12-20 12:02 UTC (permalink / raw)
  To: Martin KaFai Lau, Tariq Toukan; +Cc: Saeed Mahameed, netdev, Alexei Starovoitov

Thanks Martin, nice catch!


On 20/12/2016 1:37 AM, Martin KaFai Lau wrote:
> Hi Tariq,
>
> On Sat, Dec 17, 2016 at 02:18:03AM -0800, Martin KaFai Lau wrote:
>> Hi All,
>>
>> I have been debugging with XDP_TX and 16 rx-queues.
>>
>> 1) When 16 rx-queues are used and an XDP prog is doing XDP_TX,
>> it seems that a packet cannot be XDP_TX'ed out if it
>> is received on some particular CPUs (/rx-queues).
>>
>> 2) If 8 rx-queues are used, there is no problem.
>>
>> 3) The 16 rx-queues problem also went away after reverting these
>> two patches:
>> 15fca2c8eb41 net/mlx4_en: Add ethtool statistics for XDP cases
>> 67f8b1dcb9ee net/mlx4_en: Refactor the XDP forwarding rings scheme
>>
> After taking a closer look at 67f8b1dcb9ee ("net/mlx4_en: Refactor the XDP forwarding rings scheme"),
> and armed with the fact that '>8 rx-queues does not work', I have
> made the attached change, which fixes the issue.
>
> Making a change in mlx4_en_fill_qp_context() could be an easier fix,
> but I think this change will be easier for discussion purposes.
>
> I won't claim to know anything about how this variable
> works in CX3.  If this change makes sense, I can cook up a diff.
> Otherwise, can you shed some light on what could be happening,
> which hopefully can lead to a diff?
>
> Thanks
> --Martin
>
>
> diff --git i/drivers/net/ethernet/mellanox/mlx4/en_netdev.c w/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> index bcd955339058..b3bfb987e493 100644
> --- i/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> +++ w/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> @@ -1638,10 +1638,10 @@ int mlx4_en_start_port(struct net_device *dev)
>
>   	/* Configure tx cq's and rings */
>   	for (t = 0 ; t < MLX4_EN_NUM_TX_TYPES; t++) {
> -		u8 num_tx_rings_p_up = t == TX ? priv->num_tx_rings_p_up : 1;
The bug lies in this line.
The number of rings per UP in the case of TX_XDP should be
priv->tx_ring_num[TX_XDP], not 1.
Please try the following fix.
I can prepare and send it once the window opens again.

diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index bcd955339058..edbe200ac2fa 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -1638,7 +1638,8 @@ int mlx4_en_start_port(struct net_device *dev)

         /* Configure tx cq's and rings */
         for (t = 0 ; t < MLX4_EN_NUM_TX_TYPES; t++) {
-               u8 num_tx_rings_p_up = t == TX ? priv->num_tx_rings_p_up : 1;
+               u8 num_tx_rings_p_up = t == TX ?
+                       priv->num_tx_rings_p_up : priv->tx_ring_num[t];

                 for (i = 0; i < priv->tx_ring_num[t]; i++) {
                         /* Configure cq */
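
To make the ring-to-UP arithmetic concrete, a small standalone sketch of the
user_prio each XDP TX ring gets from the pre-fix code, from the 'i & 0x07'
sketch above, and from this fix (assuming 16 rx-queues, hence
tx_ring_num[TX_XDP] == 16; how the HW treats a UP above 7 is not spelled out
in this thread and is the part that apparently breaks):

#include <stdio.h>

int main(void)
{
	int tx_ring_num_xdp = 16;	/* one XDP TX ring per RX ring */
	int i;

	for (i = 0; i < tx_ring_num_xdp; i++) {
		int old_prio    = i / 1;		  /* pre-fix divisor was hardcoded to 1 */
		int masked_prio = i & 0x07;		  /* Martin's sketch: wraps into 0..7 */
		int fixed_prio  = i / tx_ring_num_xdp;	  /* Tariq's fix: always 0 */

		printf("XDP ring %2d: old UP=%2d  masked UP=%d  fixed UP=%d\n",
		       i, old_prio, masked_prio, fixed_prio);
	}
	return 0;
}

If only UPs 0..7 are valid, as the 'i & 0x07' workaround suggests, the old
computation hands XDP rings 8..15 an out-of-range UP, which matches the
observation that only traffic from the first 8 rx-queues makes it back out.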

> -
>   		for (i = 0; i < priv->tx_ring_num[t]; i++) {
>   			/* Configure cq */
> +			int user_prio;
> +
>   			cq = priv->tx_cq[t][i];
>   			err = mlx4_en_activate_cq(priv, cq, i);
>   			if (err) {
> @@ -1660,9 +1660,14 @@ int mlx4_en_start_port(struct net_device *dev)
>
>   			/* Configure ring */
>   			tx_ring = priv->tx_ring[t][i];
> +			if (t != TX_XDP)
> +				user_prio = i / priv->num_tx_rings_p_up;
> +			else
> +				user_prio = i & 0x07;
> +
>   			err = mlx4_en_activate_tx_ring(priv, tx_ring,
>   						       cq->mcq.cqn,
> -						       i / num_tx_rings_p_up);
> +						       user_prio);
>   			if (err) {
>   				en_err(priv, "Failed allocating Tx ring\n");
>   				mlx4_en_deactivate_cq(priv, cq);
Regards,
Tariq Toukan.


* Re: mlx4: Bug in XDP_TX + 16 rx-queues
  2016-12-20 12:02   ` Tariq Toukan
@ 2016-12-20 14:22     ` David Miller
  2016-12-21 16:37       ` Tariq Toukan
  2016-12-20 18:31     ` Martin KaFai Lau
  1 sibling, 1 reply; 9+ messages in thread
From: David Miller @ 2016-12-20 14:22 UTC (permalink / raw)
  To: ttoukan.linux; +Cc: kafai, tariqt, saeedm, netdev, ast

From: Tariq Toukan <ttoukan.linux@gmail.com>
Date: Tue, 20 Dec 2016 14:02:05 +0200

> I can prepare and send it once the window opens again.

Bug fixes like this can and should be sent at any time whatsoever.


* Re: mlx4: Bug in XDP_TX + 16 rx-queues
  2016-12-20 12:02   ` Tariq Toukan
  2016-12-20 14:22     ` David Miller
@ 2016-12-20 18:31     ` Martin KaFai Lau
  2016-12-21 16:47       ` Tariq Toukan
  1 sibling, 1 reply; 9+ messages in thread
From: Martin KaFai Lau @ 2016-12-20 18:31 UTC (permalink / raw)
  To: Tariq Toukan; +Cc: Tariq Toukan, Saeed Mahameed, netdev, Alexei Starovoitov

On Tue, Dec 20, 2016 at 02:02:05PM +0200, Tariq Toukan wrote:
> Thanks Martin, nice catch!
>
>
> On 20/12/2016 1:37 AM, Martin KaFai Lau wrote:
> >Hi Tariq,
> >
> >On Sat, Dec 17, 2016 at 02:18:03AM -0800, Martin KaFai Lau wrote:
> >>Hi All,
> >>
> >>I have been debugging with XDP_TX and 16 rx-queues.
> >>
> >>1) When 16 rx-queues are used and an XDP prog is doing XDP_TX,
> >>it seems that a packet cannot be XDP_TX'ed out if it
> >>is received on some particular CPUs (/rx-queues).
> >>
> >>2) If 8 rx-queues are used, there is no problem.
> >>
> >>3) The 16 rx-queues problem also went away after reverting these
> >>two patches:
> >>15fca2c8eb41 net/mlx4_en: Add ethtool statistics for XDP cases
> >>67f8b1dcb9ee net/mlx4_en: Refactor the XDP forwarding rings scheme
> >>
> >After taking a closer look at 67f8b1dcb9ee ("net/mlx4_en: Refactor the XDP forwarding rings scheme"),
> >and armed with the fact that '>8 rx-queues does not work', I have
> >made the attached change, which fixes the issue.
> >
> >Making a change in mlx4_en_fill_qp_context() could be an easier fix,
> >but I think this change will be easier for discussion purposes.
> >
> >I won't claim to know anything about how this variable
> >works in CX3.  If this change makes sense, I can cook up a diff.
> >Otherwise, can you shed some light on what could be happening,
> >which hopefully can lead to a diff?
> >
> >Thanks
> >--Martin
> >
> >
> >diff --git i/drivers/net/ethernet/mellanox/mlx4/en_netdev.c w/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> >index bcd955339058..b3bfb987e493 100644
> >--- i/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> >+++ w/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> >@@ -1638,10 +1638,10 @@ int mlx4_en_start_port(struct net_device *dev)
> >
> >  	/* Configure tx cq's and rings */
> >  	for (t = 0 ; t < MLX4_EN_NUM_TX_TYPES; t++) {
> >-		u8 num_tx_rings_p_up = t == TX ? priv->num_tx_rings_p_up : 1;
> The bug lies in this line.
> The number of rings per UP in the case of TX_XDP should be
> priv->tx_ring_num[TX_XDP], not 1.
> Please try the following fix.
> I can prepare and send it once the window opens again.
>
> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> index bcd955339058..edbe200ac2fa 100644
> --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> +++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
> @@ -1638,7 +1638,8 @@ int mlx4_en_start_port(struct net_device *dev)
>
>         /* Configure tx cq's and rings */
>         for (t = 0 ; t < MLX4_EN_NUM_TX_TYPES; t++) {
> -               u8 num_tx_rings_p_up = t == TX ? priv->num_tx_rings_p_up : 1;
> +               u8 num_tx_rings_p_up = t == TX ?
> +                       priv->num_tx_rings_p_up : priv->tx_ring_num[t];
>
>                 for (i = 0; i < priv->tx_ring_num[t]; i++) {
>                         /* Configure cq */
>
Thanks for confirming the bug is related to the user_prio argument.

I have some questions:

1. Just to confirm the intention of the change.  Your change is essentially
   always passing 0 to the user_prio parameter for the TX_XDP type by
   doing (i / priv->tx_ring_num[t])?  If yes, would it be clearer to
   always pass 0 instead?

   And yes, it also works in our test.  Please post an official patch
   if it is the fix.

2. Can you explain a little how the user_prio affects
   the tx behavior?  E.g., what is the difference between
   different user_prio values like 0, 1, 2, etc.?

3. Mostly a follow-up on (2).
   In mlx4_en_get_profile(), num_tx_rings_p_up (of struct mlx4_en_profile)
   depends on mlx4_low_memory_profile() and the number of CPUs.  Do these
   similar bounds apply to the 'u8 num_tx_rings_p_up' here for the
   TX_XDP type?

Thanks,
Martin

> >-
> >  		for (i = 0; i < priv->tx_ring_num[t]; i++) {
> >  			/* Configure cq */
> >+			int user_prio;
> >+
> >  			cq = priv->tx_cq[t][i];
> >  			err = mlx4_en_activate_cq(priv, cq, i);
> >  			if (err) {
> >@@ -1660,9 +1660,14 @@ int mlx4_en_start_port(struct net_device *dev)
> >
> >  			/* Configure ring */
> >  			tx_ring = priv->tx_ring[t][i];
> >+			if (t != TX_XDP)
> >+				user_prio = i / priv->num_tx_rings_p_up;
> >+			else
> >+				user_prio = i & 0x07;
> >+
> >  			err = mlx4_en_activate_tx_ring(priv, tx_ring,
> >  						       cq->mcq.cqn,
> >-						       i / num_tx_rings_p_up);
> >+						       user_prio);
> >  			if (err) {
> >  				en_err(priv, "Failed allocating Tx ring\n");
> >  				mlx4_en_deactivate_cq(priv, cq);
> Regards,
> Tariq Toukan.


* Re: mlx4: Bug in XDP_TX + 16 rx-queues
  2016-12-20 14:22     ` David Miller
@ 2016-12-21 16:37       ` Tariq Toukan
  0 siblings, 0 replies; 9+ messages in thread
From: Tariq Toukan @ 2016-12-21 16:37 UTC (permalink / raw)
  To: David Miller; +Cc: kafai, tariqt, saeedm, netdev, ast


On 20/12/2016 4:22 PM, David Miller wrote:
> From: Tariq Toukan <ttoukan.linux@gmail.com>
> Date: Tue, 20 Dec 2016 14:02:05 +0200
>
>> I can prepare and send it once the window opens again.
> Bug fixes like this can and should be sent at any time whatsoever.
Thanks Dave, I will verify my fix and send it.

Regards,
Tariq


* Re: mlx4: Bug in XDP_TX + 16 rx-queues
  2016-12-20 18:31     ` Martin KaFai Lau
@ 2016-12-21 16:47       ` Tariq Toukan
  0 siblings, 0 replies; 9+ messages in thread
From: Tariq Toukan @ 2016-12-21 16:47 UTC (permalink / raw)
  To: Martin KaFai Lau; +Cc: Tariq Toukan, Saeed Mahameed, netdev, Alexei Starovoitov



On 20/12/2016 8:31 PM, Martin KaFai Lau wrote:
> On Tue, Dec 20, 2016 at 02:02:05PM +0200, Tariq Toukan wrote:
>> Thanks Martin, nice catch!
>>
>>
>> On 20/12/2016 1:37 AM, Martin KaFai Lau wrote:
>>> Hi Tariq,
>>>
>>> On Sat, Dec 17, 2016 at 02:18:03AM -0800, Martin KaFai Lau wrote:
>>>> Hi All,
>>>>
>>>> I have been debugging with XDP_TX and 16 rx-queues.
>>>>
>>>> 1) When 16 rx-queues are used and an XDP prog is doing XDP_TX,
>>>> it seems that a packet cannot be XDP_TX'ed out if it
>>>> is received on some particular CPUs (/rx-queues).
>>>>
>>>> 2) If 8 rx-queues are used, there is no problem.
>>>>
>>>> 3) The 16 rx-queues problem also went away after reverting these
>>>> two patches:
>>>> 15fca2c8eb41 net/mlx4_en: Add ethtool statistics for XDP cases
>>>> 67f8b1dcb9ee net/mlx4_en: Refactor the XDP forwarding rings scheme
>>>>
>>> After taking a closer look at 67f8b1dcb9ee ("net/mlx4_en: Refactor the XDP forwarding rings scheme"),
>>> and armed with the fact that '>8 rx-queues does not work', I have
>>> made the attached change, which fixes the issue.
>>>
>>> Making a change in mlx4_en_fill_qp_context() could be an easier fix,
>>> but I think this change will be easier for discussion purposes.
>>>
>>> I won't claim to know anything about how this variable
>>> works in CX3.  If this change makes sense, I can cook up a diff.
>>> Otherwise, can you shed some light on what could be happening,
>>> which hopefully can lead to a diff?
>>>
>>> Thanks
>>> --Martin
>>>
>>>
>>> diff --git i/drivers/net/ethernet/mellanox/mlx4/en_netdev.c w/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
>>> index bcd955339058..b3bfb987e493 100644
>>> --- i/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
>>> +++ w/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
>>> @@ -1638,10 +1638,10 @@ int mlx4_en_start_port(struct net_device *dev)
>>>
>>>   	/* Configure tx cq's and rings */
>>>   	for (t = 0 ; t < MLX4_EN_NUM_TX_TYPES; t++) {
>>> -		u8 num_tx_rings_p_up = t == TX ? priv->num_tx_rings_p_up : 1;
>> The bug lies in this line.
>> The number of rings per UP in the case of TX_XDP should be
>> priv->tx_ring_num[TX_XDP], not 1.
>> Please try the following fix.
>> I can prepare and send it once the window opens again.
>>
>> diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
>> index bcd955339058..edbe200ac2fa 100644
>> --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
>> +++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
>> @@ -1638,7 +1638,8 @@ int mlx4_en_start_port(struct net_device *dev)
>>
>>          /* Configure tx cq's and rings */
>>          for (t = 0 ; t < MLX4_EN_NUM_TX_TYPES; t++) {
>> -               u8 num_tx_rings_p_up = t == TX ? priv->num_tx_rings_p_up : 1;
>> +               u8 num_tx_rings_p_up = t == TX ?
>> +                       priv->num_tx_rings_p_up : priv->tx_ring_num[t];
>>
>>                  for (i = 0; i < priv->tx_ring_num[t]; i++) {
>>                          /* Configure cq */
>>
> Thanks for confirming the bug is related to the user_prio argument.
>
> I have some questions:
>
> 1. Just to confirm the intention of the change.  Your change is essentially
>     always passing 0 to the user_prio parameter for the TX_XDP type by
>     doing (i / priv->tx_ring_num[t])?
Yes
>    If yes, would it be clearer to
>     always pass 0 instead?
I think it should be implied by num_tx_rings_p_up; it simply had the
wrong value.
>
>     And yes, it also works in our test.  Please post an official patch
>     if it is the fix.
Sure.
>
> 2. Can you explain a little how the user_prio affects
>     the tx behavior?  E.g., what is the difference between
>     different user_prio values like 0, 1, 2, etc.?
This is an implementation of QoS (Quality of Service).
It tells the HW how to prioritize between the different send queues.
In XDP forwarding we use a single user_prio, 0 (the default).
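
For comparison, a rough sketch of what the same 'i / num_tx_rings_p_up'
formula does for the regular TX rings (the values of 8 rings per UP and
16 TX rings are illustrative only; the real defaults come from the
profile, as asked below):

#include <stdio.h>

int main(void)
{
	int num_tx_rings_p_up = 8;	/* illustrative value only */
	int tx_ring_num = 16;		/* illustrative value only */
	int i;

	for (i = 0; i < tx_ring_num; i++)
		printf("TX ring %2d -> user_prio %d\n", i, i / num_tx_rings_p_up);

	/* Rings 0..7 map to UP 0, rings 8..15 to UP 1, and so on;
	 * the XDP TX rings, by contrast, all stay on UP 0 after the fix. */
	return 0;
}
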
>
> 3. Mostly a follow-up on (2).
>     In mlx4_en_get_profile(), num_tx_rings_p_up (of struct mlx4_en_profile)
>     depends on mlx4_low_memory_profile() and the number of CPUs.  Do these
>     similar bounds apply to the 'u8 num_tx_rings_p_up' here for the
>     TX_XDP type?
No.
The number of XDP_TX rings is equal to the number of RX rings.
>
> Thanks,
> Martin
Regards,
Tariq
>>> -
>>>   		for (i = 0; i < priv->tx_ring_num[t]; i++) {
>>>   			/* Configure cq */
>>> +			int user_prio;
>>> +
>>>   			cq = priv->tx_cq[t][i];
>>>   			err = mlx4_en_activate_cq(priv, cq, i);
>>>   			if (err) {
>>> @@ -1660,9 +1660,14 @@ int mlx4_en_start_port(struct net_device *dev)
>>>
>>>   			/* Configure ring */
>>>   			tx_ring = priv->tx_ring[t][i];
>>> +			if (t != TX_XDP)
>>> +				user_prio = i / priv->num_tx_rings_p_up;
>>> +			else
>>> +				user_prio = i & 0x07;
>>> +
>>>   			err = mlx4_en_activate_tx_ring(priv, tx_ring,
>>>   						       cq->mcq.cqn,
>>> -						       i / num_tx_rings_p_up);
>>> +						       user_prio);
>>>   			if (err) {
>>>   				en_err(priv, "Failed allocating Tx ring\n");
>>>   				mlx4_en_deactivate_cq(priv, cq);
>> Regards,
>> Tariq Toukan.



Thread overview: 9+ messages
2016-12-17 10:18 mlx4: Bug in XDP_TX + 16 rx-queues Martin KaFai Lau
2016-12-18 10:31 ` Tariq Toukan
2016-12-18 18:14   ` Martin KaFai Lau
2016-12-19 23:37 ` Martin KaFai Lau
2016-12-20 12:02   ` Tariq Toukan
2016-12-20 14:22     ` David Miller
2016-12-21 16:37       ` Tariq Toukan
2016-12-20 18:31     ` Martin KaFai Lau
2016-12-21 16:47       ` Tariq Toukan
